Updated SGD Regression & Classification with user-configurable loss functions #76
yj14n9xyz wants to merge 4 commits into CogComp:master from
Conversation
* classifier gets the empty string.
*
* @param r The desired learning rate value.
* @param r The desired learning rate value.
Could you run the formatter plugin?
mvn formatter:format
Please see the 3 updated files.
double multiplier = learningRate * (labelValue - wtx);
weightVector.scaledAdd(exampleFeatures, exampleValues, multiplier);
bias += multiplier;
} else {
hmm ... not sure about this. Hinge loss usually makes more sense in classification problems.
I can remove the configuration and use hinge loss for SGD classification and LMS for SGD regression. Would that be better?
That's better.
Even better would be a single SGD that works for both classification and regression, where the user has the option of setting either LMS or hinge loss, depending on what they need (classification/regression/etc.).
I can get the first part working soon. The latter may need further deliberation. Thanks!
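For reference, here is a minimal sketch of what such a combined learner could look like. It assumes a plain dense-array weight vector, and the names (SGDSketch, Loss, learn, score) are illustrative only, not part of the PR or the existing LBJava API. The LMS branch mirrors the update in the diff quoted above (w <- w + r * (y - w.x) * x), while the hinge branch only updates on margin violations.

// Hypothetical sketch, not LBJava's actual API: a single SGD learner with a pluggable
// loss (LMS for regression, hinge for classification), using plain dense arrays.
public class SGDSketch {

    public enum Loss { LMS, HINGE }

    private final double learningRate;
    private final Loss loss;
    private final double[] weights;
    private double bias;

    public SGDSketch(int numFeatures, double learningRate, Loss loss) {
        this.weights = new double[numFeatures];
        this.learningRate = learningRate;
        this.loss = loss;
    }

    // One SGD step on example x with label y (a real target for LMS, +1/-1 for hinge).
    public void learn(double[] x, double y) {
        double wtx = score(x);
        double multiplier;
        if (loss == Loss.LMS) {
            // Squared-error gradient step: w <- w + r * (y - w.x) * x, as in the diff above.
            multiplier = learningRate * (y - wtx);
        } else {
            // Hinge loss: update only when the margin y * (w.x) falls below 1.
            if (y * wtx >= 1)
                return;
            multiplier = learningRate * y;
        }
        for (int i = 0; i < x.length; i++)
            weights[i] += multiplier * x[i];
        bias += multiplier;
    }

    // Raw score w.x + bias; threshold at 0 for classification, use directly for regression.
    public double score(double[] x) {
        double wtx = bias;
        for (int i = 0; i < x.length; i++)
            wtx += weights[i] * x[i];
        return wtx;
    }
}

Passing the loss as a constructor parameter, rather than keeping two separate learners, matches the suggestion above and keeps the update loop in one place.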
@YimingJiang is this still a work in progress?
@danyaljj I started working on this two months ago. My working hours are finally getting better now, so I am continuing to work on it. Will post an update soon.
@danyaljj @christos-c Please review this PR.