I am trying to understand why I get different results when I enter whole numbers (e.g. 50) versus decimals (e.g. 50.0000) in my Python script.

See below for an example (I'm essentially trying to get the equation of a line):

The script in the image above gives me correct results, but requires me to type lots of trailing zeros (xx.0000000).

I would like to be able to input just the number without adding .000000 to the end.

See below for where errors arise when I input just the whole number:

Are there additional lines of code I can add to my Python script to control floating-point/decimal precision (not sure if these are the right programming terms, but hopefully you get the picture)?
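Since the script itself isn't shown here, a guess at the likely cause: in Python 2, dividing two integers with `/` performs floor division and silently truncates, while typing `50.0000` forces float division. A minimal sketch, assuming the script divides user-entered numbers to compute a slope (the function name and values are made up for illustration):

```python
from __future__ import division  # Python 2: makes "/" true division; harmless no-op in Python 3


def slope(x1, y1, x2, y2):
    # Convert inputs to float so a whole-number entry like 50
    # behaves exactly like 50.0000.
    return (float(y2) - float(y1)) / (float(x2) - float(x1))


print(slope(0, 0, 4, 50))  # -> 12.5, no trailing zeros needed
```

Without the `__future__` import or the `float()` conversions, the same call in Python 2 would evaluate `50 / 4` as `12`, which matches the kind of error described. In Python 3, `/` is always true division, so upgrading also resolves this.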

Thanks!