
/biz/ - Business & Finance



File: 202 KB, 425x390, 1535267496384.png
No.18399861

can anyone help me do my machine learning assignment? I'll pay you 10 LINK

>> No.18400143

>>18399861
go to /g/

>> No.18400549

Unironic data scientist/machine learning engineer here. Shoot me an email:

sunglasses.engi@gmail.com

>> No.18400576

just cum on it and turn it in

>> No.18400604

>>18399861
>>18400549
Or post it here. Either way.

>> No.18400622

Dude, “Hello, World!” is an easy assignment

>> No.18400670

>>18400549
Recent college grad here looking for a job. Using this quarantine to learn some new skills. What would you recommend doing to get a machine learning job? I feel it's one of those hyped fields that'll keep growing over the next few years, and I've done some neural network stuff before.

>> No.18400693

>>18400670
Study optimization algorithms and applied math. Anybody can use sklearn, but only a true wizard makes bespoke models in numpy. Also be prepared to put up with a lot of bullshit from room temp IQ MBAs.
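To illustrate the "bespoke models in numpy" point, here's a minimal sketch (the toy data and the exact approach are invented for illustration, not from this thread): ordinary least squares solved directly via the normal equation, no sklearn involved.

```python
import numpy as np

# Made-up toy data: y = 3x + 2 plus a little noise
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3 * X[:, 0] + 2 + rng.normal(0, 0.1, size=100)

# Append a bias column, then solve the normal equation (X^T X) w = X^T y
Xb = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(w)  # roughly [3, 2]
```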

>> No.18400695

>>18399861
Post here I'll do it for free

>> No.18400702

>>18399861
The best I can do is a polynomial regression using scikit-learn, and I'll need to look some things up again. From what I learned, "machine learning" is just drawing lines of best fit on data.
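For reference, a polynomial regression in scikit-learn really is only a few lines. This is a generic sketch with invented toy data, not anything from the assignment: PolynomialFeatures expands each x into [1, x, x^2], and LinearRegression then fits a line of best fit in that expanded space.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Made-up data from y = 1 - 2x + 0.5x^2 plus noise
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=(200, 1))
y = 1.0 - 2.0 * x[:, 0] + 0.5 * x[:, 0] ** 2 + rng.normal(0, 0.1, size=200)

# Expand features to degree 2, then fit ordinary linear regression on them
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(x, y)
print(model.predict([[1.0]]))  # close to 1 - 2 + 0.5 = -0.5
```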

>> No.18400717

>>18400702
Yeah pretty much.

>> No.18400730

>>18400695
I'll do it for less than free, so take that.

>> No.18400735

ML engineer here. Post the assignment so I can laugh at you for being too stupid to do it.

>> No.18400740

Fug, how many of us are there?

>> No.18400775

>>18400622
Not if you're Craig Wright

>> No.18400817

>>18400693
Thanks anon. I'll be sure to look into all of that.

>> No.18400920

>>18400604
>>18400695
>>18400702
>>18400735
wow thanks guys. it is basically taking a stochastic gradient descent approach to linear and polynomial regression. problem is I can't just ask for answers, so I'm paying LINK to get help understanding it

>> No.18400977

>>18400920
When you say polynomial regression, do you mean fitting a model where weights are multiplied in certain operations, or just a linear regression where you calculate powers of your input features?

Anyway just use some framework with autograd.

Or better yet, put your big boy pants on, write out your loss function and take some matrix derivatives.
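A sketch of the "write out your loss function and take some matrix derivatives" route, with toy data invented for illustration: for squared-error loss L(w) = ||Xw - y||^2 / n, the gradient works out to 2 X^T (Xw - y) / n, and plain full-batch gradient descent on it looks like this.

```python
import numpy as np

# Invented toy data: 3 input features with known true weights
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.05, size=200)

# Loss L(w) = ||Xw - y||^2 / n  =>  gradient = 2 X^T (Xw - y) / n
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)
    w -= lr * grad  # step against the gradient
print(w)  # converges toward true_w
```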

>> No.18401002
File: 205 KB, 659x525, 17E0EE54-40B7-4017-AEF7-E6057841A4F2.png

>>18399861
Don’t do your machine’s homework. How is he gunna learn on his own?

>> No.18401051

>>18400977
I think it is just a linear regression where you calculate powers of certain input features.
haven't learned about autograd, but it uses the AdaGrad algorithm, whatever that means

>> No.18401091
File: 199 KB, 997x556, 1553649056813.png

>>18401002
I am telling him how to think

>> No.18401123

>>18401002
Lolol. Very valid point. It won't be able to rise up when the other machines do to murder us.

>> No.18401339

The idea behind gradient descent is to reduce the error of your prediction by doing iterated guess and check. You make an initial guess of what the linear regression weights should be ("a" and "b" in "y=ax+b") and compute an answer. The answer that you get with your initial guess will be really wrong, but it doesn't matter, because at each iteration you make a small update to the values of the regression weights to make them a little less wrong.

In gradient descent, you differentiate the error with respect to each weight and then update each weight proportionally to its error gradient. So for example, in y=ax+b, the derivative of the prediction with respect to "a" is "x", therefore at each step you would update "a" by += -error * x * lr, where error is the prediction error, x is the value of the input feature, and lr is some arbitrary small constant value like 0.0001 to keep it from taking too big of a step and diverging.

Hopefully that clears things up enough to get started, and let me know as you have more questions. Also BRB.
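The update rule described above, as a runnable toy script (data, constants, and iteration counts are made up for illustration):

```python
import numpy as np

# Made-up data from y = 3x + 1 plus a little noise
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 1.0 + rng.normal(0, 0.05, size=100)

a, b = 0.0, 0.0  # initial (wrong) guesses for the weights in y = a*x + b
lr = 0.01        # small step size so the updates don't diverge
for _ in range(1000):
    for xi, yi in zip(x, y):          # stochastic: one sample at a time
        error = (a * xi + b) - yi     # how wrong the current guess is
        a += -error * xi * lr         # nudge each weight against its gradient
        b += -error * lr
print(a, b)  # should end up near 3 and 1
```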

>> No.18401482

>>18401339
Machine learning sounds like some pretty neat stuff.

>> No.18401511

>>18401482
it is actually very cool, unfortunately I'm a lazy piece of shit

>> No.18401630

>>18401511
I feel you on being lazy. I've had better luck with doing stuff simply by using the Eisenhower box.

https://jamesclear.com/eisenhower-box

It was later popularized by Stephen Covey as the "Covey Time Management Grid/Matrix".

http://www.planetofsuccess.com/blog/2015/stephen-coveys-time-management-matrix-explained/

>> No.18401731

>>18401630
thanks man! I'll check it out. Always looking for ways to stop ruining my life with chronic procrastination