In a previous post I wrote about this new (I think) set of prime numbers and how I generated them with T-SQL. Now, I want to discuss a more philosophical and practical issue. No, I am not going to split hairs about how the prime numbers were there already, like all the integers, and we are just testing them for primality. I also don’t want to discuss why 1 is or is not a prime number (I am not a mathematician). I am actually looking to discuss modularity and code reuse.
Take a look at this function:
DROP FUNCTION intIsPrime;
GO
CREATE FUNCTION intIsPrime (@n INT)
RETURNS BIT
AS
BEGIN
    DECLARE @return BIT;
    SET @return = 1;
    IF @n IN (1, 2, 3) GOTO finished;
    IF @n % 2 = 0 OR @n % 3 = 0
        OR FLOOR(SQRT(@n)) = SQRT(@n) --I can't be the first person to note that an integer square root means non-prime.
    BEGIN
        SET @return = 0;
        GOTO finished;
    END
    DECLARE @k INT;
    SET @k = 5; --By starting at five we are starting at 6k-1
    WHILE @k <= FLOOR(SQRT(@n))
    BEGIN
        IF @n % @k = 0 OR @n % (@k + 2) = 0 --which makes this 6k+1
        BEGIN
            SET @return = 0;
            GOTO finished;
        END
        SET @k = @k + 6;
    END
finished:
    RETURN @return;
END
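A quick sanity check, assuming the function above has been created in dbo (the column aliases are just for readability):

SELECT dbo.intIsPrime(2) AS two,           --returns 1
       dbo.intIsPrime(35) AS thirtyFive,   --returns 0 (35 = 5 x 7)
       dbo.intIsPrime(97) AS ninetySeven;  --returns 1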
By encapsulating the prime test in a function, we separate the operation of testing primality from that of generating integers, and from inserting into the table. Why is this advantageous? Consider a scenario where we have a table that we filled with prime numbers using an algorithm we believed was the most efficient method for testing primality. Later we realize that this super-efficient algorithm used by "famous" mathematicians actually only determined that numbers were likely prime. Uh oh. Unless we were using some sort of specially built prime-detecting supercomputer, chances are we didn't generate nearly enough numbers for a merely-likely non-prime to have slipped in. But that is still a question of chance rather than certainty.
So now, if we were to modify the script from our previous post, we would have to get rid of all of the numbers and generate new ones. But because we implemented the primality test as a function (and a better algorithm than before, which further highlights the advantages of modularity), we can use the improved algorithm to re-test the numbers already in the table. We can also take the more efficient approach of selecting only the numbers where the prime test fails, rather than iterating over all of them, as sketched below. And if we want to put in new primes, we can select from a recursive CTE those integers that pass the primality test rather than incrementing through each number.
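For the re-test, assuming (as the CTE below does) a table named primes with an integer column p, the cleanup is a single set-based statement rather than a loop:

--Find any rows that fail the improved primality test...
SELECT p FROM primes WHERE dbo.intIsPrime(p) = 0;
--...or remove them outright.
DELETE FROM primes WHERE dbo.intIsPrime(p) = 0;

And here is the recursive CTE for generating new candidates above the current maximum: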
WITH recurseMore AS (
    SELECT MAX(p) AS p, 1 AS n
    FROM primes
    UNION ALL
    SELECT p + 1, n + 1
    FROM recurseMore
    WHERE n < 101
)
SELECT p
FROM recurseMore
WHERE dbo.intIsPrime(p) = 1
Using the recursive CTE (which is more efficient, but that's a topic for a different post) further highlights the value of modularity. Note that we are restricting the recursive portion of the CTE to n < 101. This is because a CTE can only recurse 100 levels by default (+1 for the anchor row). This limits us to selecting primes that are within 100 of our max prime. We can work around this by encapsulating the CTE in a stored procedure or table-valued function that accepts the max prime as a parameter. Technically it would accept an integer and generate a list of primes less than or equal to that integer + 100. By calling this in a wrapper, we can test the function's return, insert any new primes, and increase the input parameter. If we get no primes in a group of 100 (again, I am not a mathematician, but it turns out this is possible: the numbers n! + 2 through n! + n are all composite, so gaps between primes can be made as large as you like), we can just pass the next number above it and carry on.
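Here is a rough sketch of that wrapper. The procedure name, the stopping target, and the n > 1 filter (which skips the anchor row so we don't re-insert the prime we started from) are my own illustrative choices:

CREATE PROCEDURE addPrimesAbove (@maxPrime INT)
AS
BEGIN
    SET NOCOUNT ON;
    WITH recurseMore AS (
        SELECT @maxPrime AS p, 1 AS n
        UNION ALL
        SELECT p + 1, n + 1
        FROM recurseMore
        WHERE n < 101
    )
    INSERT INTO primes (p)
    SELECT p
    FROM recurseMore
    WHERE n > 1 --skip the anchor row; it was already tested
      AND dbo.intIsPrime(p) = 1;
END
GO

--Wrapper: keep asking for the next group of 100 until we pass some target.
DECLARE @maxPrime INT, @newMax INT;
SELECT @maxPrime = MAX(p) FROM primes;
WHILE @maxPrime < 6000000 --the target is arbitrary; stop somewhere
BEGIN
    EXEC addPrimesAbove @maxPrime;
    SELECT @newMax = MAX(p) FROM primes;
    IF @newMax > @maxPrime
        SET @maxPrime = @newMax;         --we found at least one new prime
    ELSE
        SET @maxPrime = @maxPrime + 100; --a barren stretch of 100: slide the window up
END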
5070505 and less obviously 5070503 make an excellent case for modularity of code as well as a possible counterpoint to the claim that all primes > 3 are of the form 6k±1.
Actually, this is not a counter to the claim that all primes > 3 are of the form 6k±1. It is evidence that not all numbers of the form 6k±1 are prime. (The original claim holds because every integer takes one of the forms 6k, 6k±1, 6k±2, or 6k+3, and all but 6k±1 are divisible by 2 or 3.)