These are some small tidbits which will hopefully save you a lot of the time I and others have spent learning (and relearning) them.
The trigonometric functions in most math libraries operate on radians, while many algorithms presented here are in degrees or arcseconds. Additionally, humans generally prefer to work with degrees. So, it is imperative that the programmer remember to convert any degrees to radians before passing them into functions like tan(), sin(), cos(), and to convert radians back to degrees when interpreting the results of functions like atan2(), asin(), acos(). The conversion is quite simple, \( \text{radians} = \text{degrees} \cdot \frac{\pi}{180} \), and the inverse is \( \text{degrees} = \text{radians} \cdot \frac{180}{\pi} \). This mistake is so common that it deserves the first spot on the list, and should be the first thing to consider when getting an incorrect result involving trigonometric functions.
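As a minimal sketch in Python (most languages provide equivalents, such as Python's own math.radians() and math.degrees()):

```python
import math

def deg_to_rad(degrees):
    """Convert degrees to radians before calling sin(), cos(), tan(), etc."""
    return degrees * math.pi / 180.0

def rad_to_deg(radians):
    """Convert radians back to degrees after calling atan2(), asin(), acos(), etc."""
    return radians * 180.0 / math.pi

print(math.tan(deg_to_rad(45.0)))        # ~1.0
print(rad_to_deg(math.atan2(1.0, 1.0)))  # 45.0
```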
In order to avoid mistakes like forgetting to convert radians to degrees, or a calendar date to a Julian Date, I find it best to keep all function inputs and outputs in the same units. This usually means representing angles as radians, and dates/times as Julian Dates. Internally, a function may be easier to write using something else, but it should always accept, and return, radians and Julian Dates. There are always exceptions, though they should be limited to only the most obvious cases, such as functions whose sole purpose is to convert from one representation to another. In situations where degrees are used instead of radians, it will likely be easier to use decimal degrees, and also to use degrees for Right Ascension rather than hours.
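A hypothetical sketch of this convention (the function name is made up; the coefficients are the leading terms of Meeus' expression for the Sun's mean longitude, used here only as an illustration):

```python
import math

J2000 = 2451545.0  # Julian Date of the J2000.0 epoch

def sun_mean_longitude(jd):
    """Return the Sun's mean longitude in radians for a given Julian Date."""
    t = (jd - J2000) / 36525.0             # Julian centuries since J2000
    degrees = 280.46646 + 36000.76983 * t  # internally it is easier to work in degrees...
    return math.radians(degrees % 360.0)   # ...but the function still returns radians
```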
Knowing the origin of these terms may help in remembering that they are different. The terms "minute" and "second" originate from the sexagesimal number system, which is a base-60 number system (as opposed to the decimal system, which is base 10). The term "minute" comes from the Latin phrase "pars minuta prima", which just means "first small part". Likewise, the term "second" is just a shortened version of "second small part". The sexagesimal system continues with "thirds", "fourths", etc., each just meaning 1/60th of the previous part. What they are parts of depends on the context. So, when referring to hours, a minute means 1/60th of an hour, and a second means 1/60th of a minute. Likewise, when referring to degrees, a minute means 1/60th of a degree, and a second means 1/60th of that minute.
Astronomers use the terms "arcminute" and "arcsecond" to disambiguate these two common usages: the plain terms "minute" and "second" refer to time, and the "arc" versions refer to angle. But it is still quite common in other fields for "minute" and "second" to refer to parts of a degree, and it is up to the reader to interpret the meaning from the context. This may seem archaic in today's world, but the terms were applied to parts of a degree well before it was possible to accurately measure minutes and seconds of time. So, when the terms were coined, minutes and seconds of time were likely the less frequently used meaning.
It is easy to forget that a Julian Date which is a whole number corresponds to noon UTC, not midnight. And when trying to specify midnight on a certain date, it is easy to forget to subtract 0.5.
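As a quick sketch: JD 2451545.0 is 2000 January 1 at 12:00 UTC, so midnight at the start of that day is 2451544.5.

```python
JD_2000_JAN_1_NOON = 2451545.0           # 2000 January 1, 12:00 UTC
jd_midnight = JD_2000_JAN_1_NOON - 0.5   # 2451544.5 = 2000 January 1, 00:00 UTC
```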
Most programming languages and editors today will allow you to use the Greek Unicode symbols as variables, just as they appear in the equations. Using the variables as they appear in the equations makes it much easier to verify that the equation is coded correctly. Attempting to spell out the letters, like "alpha", can make the equations much more difficult to read, and even cause them to run off the screen. Abbreviating the letters, like "a" for α, can cause confusion when the same letter is used for a different purpose. Also, a character like ρ looks like the letter "p" but is spelled "rho", which may lead to confusion, especially if both "p" and "r" are already used as variables.
Unfortunately, even today there are still a lot of editors, compilers, etc. which fail to interpret the Unicode symbols correctly, or require special options to be set in order for them to be used. This makes it much more difficult for other people to use the code. So, when developing for a wide audience, it is preferable to use only ASCII characters. And, for this reason, most of the code here does not use the Unicode symbols. But I still often write code using the Unicode symbols first, until I'm confident the equation is implemented correctly, then convert it to ASCII for publication.
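A sketch of both styles in Python, which accepts Greek letters in identifiers, using the standard altitude formula as a stand-in:

```python
import math

# First pass, written with the same symbols as the source equation:
#   h = asin( sin(φ)·sin(δ) + cos(φ)·cos(δ)·cos(H) )
def altitude_unicode(φ, δ, H):
    """Altitude in radians (φ latitude, δ declination, H hour angle, all in radians)."""
    return math.asin(math.sin(φ)*math.sin(δ) + math.cos(φ)*math.cos(δ)*math.cos(H))

# ASCII version for publication, with the letters spelled out:
def altitude_ascii(phi, delta, H):
    """The same computation with ASCII identifiers."""
    return math.asin(math.sin(phi)*math.sin(delta) + math.cos(phi)*math.cos(delta)*math.cos(H))
```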
When the Gregorian Calendar was adopted, the day after Thursday, October 4, 1582 was Friday, October 15, 1582. Although most developers of date and time libraries are aware of this, dates before October 15, 1582 are not handled consistently among different libraries. So, when working with distant historical dates, make sure to account for how they are handled in the given environment, and let the user know exactly how such dates are handled.
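For instance, Python's datetime module uses the proleptic Gregorian calendar, so the ten dropped days exist there as ordinary dates:

```python
from datetime import date, timedelta

# Python extends the Gregorian calendar backwards (proleptic), so the days
# removed in October 1582 are treated as normal, existing dates:
print(date(1582, 10, 4) + timedelta(days=1))  # 1582-10-05, not 1582-10-15
```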
The old adage is: "A man with one watch knows what time it is. A man with two watches is never sure." A user might be confused as to why they receive different results from different applications. So it is generally better to implement and use the full-accuracy algorithms, even when such accuracy isn't warranted, and only switch to reduced-accuracy algorithms when there is a need to do so. That need may simply be the lack of desire to implement the full-accuracy algorithm, or the wish to keep the code smaller. But inform the user of this, and if possible, give them an idea of the accuracy they can expect. When drawing graphs, where there is a lot of computation involved but sub-pixel accuracy isn't warranted, it may be best to use a low-accuracy algorithm for the graph; but if the data is displayed to the user as numerical values at some other point, use the full-accuracy algorithm.
Leap seconds are added to keep UTC in sync with the Earth's irregular, gradually slowing rotation. There is no algorithm to predict them far in advance; instead, they are announced by the International Earth Rotation and Reference Systems Service (IERS) about six months ahead of time. The IERS publishes a list of all leap seconds, and programs need to download and update that list automatically if they wish to maintain that level of accuracy.
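A minimal sketch of such a lookup, using a hard-coded excerpt of the list (a real program would refresh the full list from the IERS):

```python
from datetime import datetime, timezone

# Partial excerpt: (date the new offset takes effect, TAI - UTC in seconds).
LEAP_SECONDS = [
    (datetime(2012, 7, 1, tzinfo=timezone.utc), 35),
    (datetime(2015, 7, 1, tzinfo=timezone.utc), 36),
    (datetime(2017, 1, 1, tzinfo=timezone.utc), 37),
]

def tai_minus_utc(t):
    """Return TAI - UTC in seconds for a UTC datetime (only valid within this excerpt)."""
    offset = 34  # value in force just before the first entry above
    for effective, seconds in LEAP_SECONDS:
        if t >= effective:
            offset = seconds
    return offset

print(tai_minus_utc(datetime(2020, 1, 1, tzinfo=timezone.utc)))  # 37
```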
It's also important to note that most software representations of UTC cannot express a leap second. So, for programs that only accept such input, there is no way to specify the instant at which a leap second occurred. Some systems account for this by declaring that the minute containing a leap second has 61 seconds (allowing a time of 23:59:60), but this is not widely supported and may cause issues on many systems. The better solution is to also allow a time system which does not contain leap seconds, like International Atomic Time (TAI) or Terrestrial Time (TT).
The most common way to specify a longitude today is that longitudes west of the Prime Meridian are negative, and those to the east are positive. However, it used to be common for the opposite to be true. It is also still common today to avoid negative longitudes entirely, so that 90° W is represented as 270°. The most proper way is to explicitly specify E or W, rather than relying on positive or negative numbers.
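When a program must accept longitudes from different sources, a small normalization helper like this sketch can reduce surprises:

```python
def to_east_positive(longitude_deg):
    """Normalize a 0-360 style longitude to east-positive degrees in -180..+180."""
    lon = longitude_deg % 360.0
    if lon > 180.0:
        lon -= 360.0
    return lon

print(to_east_positive(270.0))  # -90.0, i.e. 90° West
```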
Different languages and libraries handle the conversion of negative floating point numbers to whole numbers differently. Most languages have a floor() or int() function. When given a negative number such as -1.5, some implementations return -2, others -1. Some languages supply a truncate() function to disambiguate this behaviour, which would return -1.
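Python, for example, exposes both behaviours:

```python
import math

print(math.floor(-1.5))  # -2  (rounds toward negative infinity)
print(math.trunc(-1.5))  # -1  (rounds toward zero)
print(int(-1.5))         # -1  (int() also truncates toward zero)
```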
An actual date in the Julian Calendar looks a lot like a date in the Gregorian Calendar, e.g. Oct 15, 1400, whereas a Julian Date is just a number, e.g. 2451545.0. Outside of astronomical applications, it is common to see "Julian Date" refer to a date in the Julian Calendar; in the context of astronomy, the distinction is usually respected. It is usually quite obvious from the context which meaning is intended.
In older texts, and even some recent texts like Meeus' Astronomical Algorithms, an azimuth of 0° is considered due South, while most modern texts consider 0° to be due North. As a result, algorithms from those texts may appear slightly different, for example a sign change, or the use of cos() rather than sin().
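Assuming both conventions increase in the same rotational direction (through West from South, through East from North), a result from an older text can be shifted with a sketch like this:

```python
def azimuth_south_to_north(az_south_deg):
    """Convert an azimuth measured from due South to one measured from due North."""
    return (az_south_deg + 180.0) % 360.0

print(azimuth_south_to_north(0.0))    # 180.0: due South in the modern convention
print(azimuth_south_to_north(180.0))  # 0.0:   due North in the modern convention
```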
It is quite common, even often recommended, to implement a polynomial such as \(y = A + Bx + Cx^2\) in the nested form y = A + x*(B + x*C). But in that form it is quite difficult to verify that the equation is implemented correctly, especially if more terms are involved. Even though most languages include a pow() function, I find it best to implement it as y = A + B*x + C*x*x. This does involve more computational overhead, but it will likely make up for that in debugging time.
However, the popular astronomical library Standards of Fundamental Astronomy (SOFA) does use the nested form described above. Implementing a polynomial differently may introduce some (usually insignificant) floating point rounding differences. So, if the goal is to reproduce the standard exactly, you should use the same method used in SOFA. You will therefore see this form in some of my code, but I recommend avoiding it wherever possible.
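Both forms side by side, using the IAU 1980 mean obliquity polynomial (in arcseconds) as an example; t is Julian centuries since J2000:

```python
def obliquity_expanded(t):
    """Mean obliquity in arcseconds, written term by term: easy to check against the book."""
    return 84381.448 - 46.8150*t - 0.00059*t*t + 0.001813*t*t*t

def obliquity_horner(t):
    """The same polynomial in nested (Horner) form, as SOFA-style code writes it."""
    return 84381.448 + t*(-46.8150 + t*(-0.00059 + t*0.001813))
```

The two functions give the same result apart from possible last-digit rounding differences.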
There are a multitude of RA/Dec systems in common use: for example, True Equator and Equinox of Date, Mean Equator and Equinox of Date, J2000, ICRS, etc. Additionally, the positions may be geocentric, topocentric, observed, apparent, etc. Coordinates are often presented without any indication of which system they are in. And, like giving someone a time without specifying a time zone, assumptions are made which usually work out, but fail spectacularly when they don't. So, make every attempt to determine which system is in use. The key takeaway is to make sure programs allow users to determine exactly which systems are being used.
Greenwich Mean Time is no longer in official use; it was replaced by Coordinated Universal Time (UTC) in 1972. However, the GMT name lives on, and is almost always intended to mean UTC. Even some time zones are named GMT but actually use UTC. However, dates and times before 1972 might actually be GMT dates and times, so it is still important to be specific.
The information does not have to be prominent, but it should be available somewhere for the users who want it. For example, you should let users know whether IAU 2000A or 2000B nutation is used. This will help avoid confusion if they receive differing results from other sources.
A day is 24 hours long, and there are 24 hours of Right Ascension, so obviously they're the same thing, right? No! UTC is based on mean solar time, while Right Ascension is measured relative to distant (apparently non-moving) objects. So, the Earth rotates through all 24 hours of RA in 23 hours, 56 minutes, 4.0905 seconds. A UTC second is therefore about 1.0027 seconds of RA. That's just small enough to be negligible over short time periods, but it adds up to nearly four minutes over a day.
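The numbers above, worked out:

```python
SOLAR_DAY_SECONDS    = 86400.0                    # 24 hours of UTC
SIDEREAL_DAY_SECONDS = 23*3600 + 56*60 + 4.0905   # one rotation through 24h of RA

print(SOLAR_DAY_SECONDS / SIDEREAL_DAY_SECONDS)   # ~1.0027 seconds of RA per UTC second
print(SOLAR_DAY_SECONDS - SIDEREAL_DAY_SECONDS)   # ~236 s (~3m56s) of drift per day
```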
A 64-bit double is accurate to about 15 digits. Since modern Julian Dates are 7 digits, that leaves about 8 digits for the fraction of a day, which is a bit less than one millisecond. And that will hold true for contemporary dates until another digit is needed for the Julian Date, which will be about 20,000 years from now. Since most algorithms work on Julian Centuries (or Julian Millennia) since J2000, they can offer more precision. They generally do this by allowing the user to specify the Julian Date as two 64-bit floats, for example the first being J2000 and the second being the Julian Date minus J2000, then computing the Julian Centuries with something like t = ((jd1 - J2000) + jd2) / 36525, which is precise to about a femtosecond.
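A sketch of the two-part convention (the same idea behind the jd1/jd2 argument pairs in SOFA's routines):

```python
J2000 = 2451545.0  # Julian Date of the J2000.0 epoch

def julian_centuries(jd1, jd2):
    """Julian centuries since J2000 from a two-part Julian Date jd1 + jd2.

    The date can be split in whatever way preserves the most precision, e.g.
    jd1 = J2000 and jd2 = the remainder (as described above), or jd1 = the 0h
    Julian Date and jd2 = the fraction of the day.
    """
    return ((jd1 - J2000) + jd2) / 36525.0

# The same instant (JD 2457920.75) expressed with two different splits:
print(julian_centuries(J2000, 6375.75))    # jd1 = J2000, jd2 = JD - J2000
print(julian_centuries(2457920.5, 0.25))   # jd1 = 0h Julian Date, jd2 = fraction of day
```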
Greek letters are used in many of the equations. When implementing an algorithm, take the time to learn the names of the characters used. Otherwise you end up making up your own names like "little squiggly" or "squiggly like an E", which can lead to errors. You don't have to memorize them all, just the ones you're currently working with.