Types Of Programming [Highly Detailed and Useful for Both Old and New Programmers]

in #geek • 8 years ago

There are a few ways to categorise programming languages, and I will try to convey the most useful ones I know of.

This is of course a simplified version, but I hope it'll suffice. If you disagree, please tell us all why instead of simply downvoting. (Simplifications tend to collect downvotes.)

This grew quite a lot longer than I expected, but as I've written it anyway I'll post it here. If you have any questions or don't understand something, please, please, please ask. I'd love to answer. I hope the glow I get in my eyes when I speak or write about this is evident enough.

TURING COMPLETENESS

One thing you need to know before discussing programming languages is the concept of Turing completeness. If a language is Turing complete, it means that programs written in it can compute anything that programs written in any other Turing-complete language can.

It might surprise you that most probably, all programming languages you've ever heard of are Turing complete. That means that most probably, a program written in any language you've heard of can accomplish the same things as a program written in any other language you've heard of.

This, in a sense, means that you will never come across a situation where you can't write a particular program because of the language you're using. It might be complicated to write that program in your language, but it will never be impossible.

Now, on to how programming languages can differ.

STATIC VS. DYNAMIC LANGUAGES

Your physical computer runs on a language which pretty much just consists of numbers. Loads and loads of numbers. When you program in Java, you do not type numbers. The things you type when you program need to somehow be converted into these numbers that the computer can churn. The language that your computer speaks, the one that just consists of lots and lots of numbers, is called machine code. Machine code differs from one computer architecture to another.

Static languages

Examples:

  • Assembly

  • C

  • C++

  • Fortran

  • Pascal

  • Objective-C

  • Haskell

  • Go

(List 1)

One simple way would be to take all the code that the programmer has written and translate it into numbers that mean the same thing. This step is usually called compilation. Then you can execute the program, which now is essentially just a long list of numbers. These numbers are the machine code for your particular computer. Languages which do this compilation are called static.

When you have this translation as a separate step, you can perform a lot of analysis on the code before turning it into a runnable program. Static languages generally prefer to signal errors and add code and definitions during compilation, rather than while the program is running. (That, by the way, is the more technical definition of a static language: a language that tries to signal errors and define things in the code before the program is even run.)

Dynamic languages

Examples:

  • Python

  • Scheme

  • Ruby

  • Perl

  • PHP

  • JavaScript

  • Common Lisp

(List 2)

Another solution could be to have some kind of interpreter running on the computer, which reads your code directly and does the corresponding right thing on your physical machine. Now, you never translate to numbers in between, the interpreter does the right thing based directly on what code you typed. You never get a separate file for the compiled program, but you just feed your code directly to the interpreter. Languages which do this are called dynamic.

When feeding the code to an interpreter, you usually don't do much analysis on the code before running it. If there is an error in the code, it will usually not be signalled until you're trying to execute the line of code that contains the error. In dynamic languages, you can sometimes even extend the code or add definitions and such things while the program is running. (So this is the more technical definition of a dynamic language: a language that postpones signalling of errors until you actually try to execute the bad parts of the code, and a language where you can alter and add definitions of functions and things while the program is running.)
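To make that second point concrete, here is a minimal Python sketch of replacing a function definition while the program is running (the function name is made up for illustration):

```python
# In a dynamic language, definitions can be replaced at run time.
def speed():
    return "slow"

print(speed())  # prints "slow"

def speed():  # the redefinition takes effect immediately, no recompilation
    return "fast"

print(speed())  # prints "fast"
```

Every call goes through the name `speed`, so whatever that name points to right now is what runs.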

Common Lisp is an interesting outlier here, because it is a dynamic language in that it postpones signalling of errors and it has an advanced macro system which allows you to change code at run time, but it is fairly common to compile it to machine code.

Languages compiled to bytecode

Examples:

  • Java

  • C#

  • Erlang

  • Scala

  • F#

  • Clojure

  • Flash ActionScript

(List 3)

There is, however, a third way. You could do a combination of the two. You could compile the code into machine code which is not meant for your machine, but for a "virtual" machine running on your machine. (Yo dawg...) This is called compiling into bytecode. Java does this, for example. When you compile Java code, it does not turn into machine code for your physical machine. It turns into bytecode for the Java Virtual Machine. Then the virtual machine acts as an interpreter for the bytecode. There is no proper name for languages of this class, you usually just say that they "compile to bytecode."

Generally, static languages have shorter execution times than languages compiled to bytecode. Dynamic languages are the slowest of all. There are exceptions: Java, for one, is unexpectedly bloody quick, due to several features which have been fine-tuned and amazingly optimised over time. Java beats many static languages as far as raw speed is concerned. That is, by the way, one of the few things about Java that impress me.

COMPARING TYPE SYSTEMS

As far as your computer is concerned, everything is just numbers. Everything can be converted to numbers, and reasoned about as numbers. Text is numbers, images are numbers, music is numbers, films are numbers. Everything is just numbers man.

Now, think about the following snippet of code (in no particular language):

a = 6
b = 1.414
return a + b

(Snippet 1)

What should be returned? If we pretend our computer can only store integers (which is as close to the truth as you'll get), how does it store b? And how does it know that it should convert both a and b to the same format before doing the addition?

This is where type systems come in. In Java, the previous code would look something like this:

int a = 6;
double b = 1.414;
return a + b;

(Snippet 2)

Now, we have included extra information. Now, Java will know that a is an integer and b is a floating point number. Java knows how to deal with the addition now, and to avoid losing precision, it will first convert a to a floating-point number and then perform the addition.
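Python does the same promotion automatically, without the explicit annotations; a quick sketch:

```python
a = 6          # an integer
b = 1.414      # a floating-point number
c = a + b      # a is converted to a float before the addition
print(type(c)) # the result is a float, so no precision is lost
```

The annotations in the Java version just make explicit what the type system works out here behind the scenes.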

This is essentially what a type system does at its core. It provides the computer with a way of doing the right thing for the right kind of value. There are a few different approaches to type systems though. Let's first look at what kind of problems can occur.

Pretend you have the following snippet (in no particular language):

age = 43
name = "Vincent"
return age - name

(Snippet 3)

Now this doesn't make sense. At all. As humans, we quickly see that you never subtract a name from an age. It doesn't mean anything at all. This is wrong. This is an error.

The type system can catch this. If the type system knows that age is a number and name is a string, the type system can say, "Hey, you're trying to subtract a string from a number. You can't do that, fool!"

How do different kinds of type systems deal with this?

Static type systems

Examples:

  • C

  • Java

  • Haskell

  • C#

  • C++

  • Scala

  • F#

  • Objective-C

  • Go

(List 4)

As you remember, static languages were all about translating your code into machine code, or compiling your code. As a part of this compilation, you might want to look into conflicts among the types. In a language with a static type system, the compiler will say, "Hey! Stop! I can't do this!" if it encounters the code in Snippet 3. It will refuse to compile the program until you fix the error.

When you program in languages with static type systems, the compiler will refuse to compile your code until you have fixed possible type errors. Depending on how you view this, it might be either a benefit (you can't compile programs which are wrong somewhere) or a drawback (you can't compile programs which you as a human know are right, even though they look wrong to the computer.)

Dynamic type systems

Examples:

  • Python

  • Ruby

  • Common Lisp

  • Scheme

  • Clojure

  • JavaScript

  • PHP

  • Erlang

  • Prolog

(List 5)

As you might be able to guess, a dynamic type system is the counterpart to a static type system. When the type system is dynamic, it essentially means that type errors will not be caught by any kind of compiler. If you were to run Snippet 3 in a language with a dynamic type system, it will happily chug along until it hits the offending line.

As with a static type system, if this is good or not depends on how you view it. Some people like being able to run a program which contains some dubious code because they know the code is really correct, or because they know they won't execute that bit of code anyway. Other people prefer that the compiler tells them immediately where the code looks suspicious.
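For instance, Snippet 3 translated into Python (a language with a dynamic type system) is accepted without complaint; the error only appears when the offending line actually runs:

```python
def compute():
    age = 43
    name = "Vincent"
    return age - name  # nonsense, but not checked until this line executes

# Defining the function is fine; only calling it triggers the error.
try:
    compute()
except TypeError as e:
    print("runtime type error:", e)
```

If `compute()` were never called, the program would happily run to completion with the nonsense still in it.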

Strong vs. weak type systems

It is easy to think that a "weak" type system is bad, and that a "strong" type system is good. This is absolutely not the case. The strength of a type system simply indicates how minor an error it will trip up on. Technically, the following is wrong, from a type system perspective:

5 + 9.3

(Snippet 4)

It is "wrong" because you're trying to apply the addition operator to values of two different types. It is a very minor "error", so only a very, very strong type system will refuse to compile, or crash, at that line. A slightly weaker type system will do what is probably "the Right Thing" in that situation and produce the floating-point value 14.3.

However, even a weaker type system might trip up on something like what you saw in Snippet 3. There are, however, really, really weak type systems that would let that through as well.
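Python sits toward the stronger end of this scale: it happily mixes numeric types as in Snippet 4, but refuses the number-minus-string nonsense of Snippet 3 (where JavaScript, much weaker, would produce `NaN` and carry on):

```python
print(5 + 9.3)   # fine: numeric types mix freely, giving roughly 14.3

try:
    5 + "9.3"    # a stronger type system refuses to coerce the string
except TypeError:
    print("refused: no implicit string/number coercion")
```

Where exactly a language draws this line is a design choice, not a quality ranking.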

Examples of languages with very weak type systems:

  • JavaScript

  • PHP (Thanks to /u/TimMensch and /u/mkantor for pointing this out independently)

  • TCL

  • Perl

  • C

  • Assembly (Thanks to /u/masklinn for the last four)

Examples of languages with pretty strong or very strong type systems:

  • All the others.

JavaScript is notorious for its weak type system. If you haven't seen it yet, I recommend you watch The Wat Talk, which makes fun of the weak typing of JavaScript. I'm not saying weak typing is bad, mind you. To each their own.

IMPERATIVE VS. DECLARATIVE PROGRAMMING

You can divide programming languages (or methodologies, really) into two very broad categories. When you write code, you can do it either imperatively or declaratively. In many languages, you can do either, but all the languages I have used lean toward one style, and trying to use the other in those languages is not always the best idea.

Imperative programming

Examples:

  • Java

  • C

  • C++

  • Assembly

  • Pascal

  • Objective-C

  • Go

  • Python

  • Ruby

  • Perl

  • PHP

  • C#

(List 6)

You will recognise imperative programming from your Java experience. When programming imperatively, you pretty much write down a series of steps the computer must take to give you the result you're looking for. Programs tend to be of the form

do this
then do this
then do this
if not this
    do this
else
    do this instead
then do this
and then this

(Snippet 5)

For imperative programming to be useful, you need variables with values which can change over time. This might seem like a strange remark to you, coming from Java, but it's not as obvious as you might think.

Imperative programming relies on having some kind of representation of the state of the program, and then being able to modify that state. This is usually accomplished with changing the value of variables. This has been the dominant style to write programs since the conception of computers, pretty much. This is the way machine code works too, so yeah, it has a pretty deep tradition in computing.

To give you a very rudimentary example (this is of course ridiculous, but I'm very bad at examples so bear with me), this would be a very imperative way to calculate a mathematical expression:

x = 3
division = f(x)
division /= df_dx(x)
x -= division

(Snippet 6)

You might or might not recognise that this is a very basic implementation of parts of the Newton–Raphson method. It contains two variables which it changes during the course of the program to achieve a result. It changes the variables through a "do this; then do this; and then this" pattern of instructions. Don't worry too much about what the lines do if you're not familiar with Newton–Raphson. It's just an example.
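As a runnable Python sketch of Snippet 6, with an assumed example function f(x) = x² − 2 (my choice, not from the original), so the method homes in on √2:

```python
def f(x):
    return x * x - 2      # example function (an assumption for illustration)

def df_dx(x):
    return 2 * x          # its derivative

x = 3.0
for _ in range(20):       # repeat the "do this; then do this" steps
    division = f(x)
    division /= df_dx(x)
    x -= division

print(x)  # close to 1.41421..., the square root of 2
```

Note how the whole computation is driven by mutating `x` and `division` over and over; that mutation is the hallmark of the imperative style.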

Declarative programming

Examples:

  • Common Lisp

  • Haskell

  • Scheme

  • Erlang

  • Scala

  • F#

  • Clojure

  • Prolog

(List 7)

Whereas imperative programming was about stating a series of steps to take, when programming declaratively, you try to describe the values you're looking for as accurately as possible. Instead of saying "do this; then do this; and then this", you strive for something like:

doing A is the same thing as doing F(X)
doing F(A) is the same thing as doing G(A, H(3, A))
doing G(A, B) is the same thing as doing A + B
and doing H(A, B) is the same thing as doing G(A, B) + B

When programming declaratively, you try to compose descriptions of problems instead of listing the steps required to solve the problems. When programming this way, you don't need variables which can change with time. All the variables in your program can keep the values they started with. For ever. Having variables which don't change comes with a lot of benefits if you're trying to write programs which run several instances of themselves simultaneously; you don't have to worry about instances trashing each other's variables, because you can't change variables anyway.

The mathematical expression would declaratively look something more like:

x = 3
new_x = x - f(x) / df_dx(x)

Instead of supplying each individual step, I compose all the steps onto a single line of code, saying, "Calculating new_x is the same thing as calculating this expression." The expression will then be further divided into pieces by the compiler/interpreter, and I don't have to worry about doing that manually.

As you might realise now, mathematics is usually partly programmed declaratively, even in traditionally imperative languages.
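The same update written declaratively in Python, again assuming f(x) = x² − 2: the whole step is one expression rather than a sequence of mutations.

```python
def f(x):
    return x * x - 2          # example function (an assumption)

def df_dx(x):
    return 2 * x              # its derivative

def newton_step(x):
    # "the new x is the same thing as x - f(x) / f'(x)"
    return x - f(x) / df_dx(x)

print(newton_step(3.0))  # one step toward the square root of 2
```

Nothing is ever overwritten here; each call simply describes a new value in terms of old ones.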

Summary on imperative vs. declarative programming

The summary usually given is the following:

Imperative programming is about describing how to do something. Declarative programming is about describing what something is.

The two types of programming have different benefits and drawbacks, of course.

PROGRAMMING PARADIGMS

Paradigm is a word you'll hear every now and then. It essentially refers to a mindset you have when writing code and solving problems. There are really 7982349 different paradigms out there, so I was thinking I could maybe summarise the ones I feel are the most important to know. You will see this topic relates a lot to the previous one, and that is because imperative and declarative programming are really paradigms too, each with several sub-paradigms under it. I made them a separate topic because I think they're very important.

Sequential programming

Examples:

  • Assembly

  • BASIC

(List 8)

This is the paradigm that is absolutely closest to the physical machine. Sequential programming languages are languages where the program is just a huge, huge list of instructions to perform, one after the other. If you want a loop, you have to specify that pretty much as "Start over from instruction number 37." If you do not explicitly jump to another instruction, the computer/interpreter will just continue on and execute the next instruction in the list.
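To see what "start over from instruction number 37" feels like, here is a toy instruction-list machine sketched in Python (the instruction names are invented for illustration):

```python
# A toy "machine": a list of instructions executed top to bottom,
# with an explicit jump instead of a loop construct.
program = [
    ("set", "i", 0),               # 0: i = 0
    ("add", "i", 1),               # 1: i += 1
    ("jump_if_less", "i", 3, 1),   # 2: if i < 3, start over from instruction 1
    ("halt",),                     # 3: stop
]

def run(program):
    vars, pc = {}, 0               # pc is the "program counter"
    while True:
        op = program[pc]
        if op[0] == "set":
            vars[op[1]] = op[2]; pc += 1
        elif op[0] == "add":
            vars[op[1]] += op[2]; pc += 1
        elif op[0] == "jump_if_less":
            pc = op[3] if vars[op[1]] < op[2] else pc + 1
        elif op[0] == "halt":
            return vars

print(run(program))  # {'i': 3}
```

The only way to loop is to set the program counter back by hand, which is exactly how assembly and old-school BASIC work.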

Structured (or procedural) programming

Examples:

  • C

  • C++

  • Pascal

  • Go

(List 9)

This is a step up from sequential programming. Structured, or procedural, programming implies that you have access to some control structures. You don't have to jump to a particular instruction, because you can use an if/else block. You can even loop using something like while. And you can define and call functions/methods/procedures (whatever you want to call them).
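The "start over from instruction number 37" dance becomes a structured loop; in Python, for example:

```python
def count_to(n):
    i = 0
    while i < n:   # the control structure replaces an explicit jump
        i += 1
    return i

print(count_to(3))  # 3
```

Same behaviour as the jump-based version, but the loop's shape is visible at a glance instead of being buried in instruction numbers.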

Functional programming

Examples:

  • Common Lisp

  • Haskell

  • Scheme

  • JavaScript

  • Erlang

  • Scala

  • F#

  • Clojure

(List 10)

Now this you mentioned in your submission. Functional programming is not related to sequential or procedural programming. Functional programming is essentially the embodiment of declarative programming. You define functions in terms of other functions, and then in the end, the thing you want to do is preferably just functions applied to functions applied to functions.

Functional programming tends to allow you to do very funky stuff with functions, which makes life easier for some types of problems.
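A taste of that "funky stuff" in Python: functions that build brand-new functions out of other functions (the names here are made up for illustration).

```python
def compose(f, g):
    # returns a new function: "do g first, then f"
    return lambda x: f(g(x))

double = lambda x: x * 2
increment = lambda x: x + 1

double_then_add_one = compose(increment, double)
print(double_then_add_one(5))  # 11
```

Once functions are values like any other, you can pass them around, store them, and glue them together, which is the bread and butter of the functional style.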

Logic programming

Examples:

  • Prolog

(List 11)

This must be one of the stranger paradigms. In logic programming, what you essentially do is state the rules that your solution must comply with, and then the interpreter finds a solution for you. By itself. It can be really powerful when mastered and used correctly.
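A crude Python stand-in for the idea (a real Prolog interpreter searches far more cleverly): state the rules, and let the machine find values that satisfy them.

```python
from itertools import product

def solutions():
    # the "rules": two constraints the answer must satisfy
    for x, y in product(range(10), repeat=2):
        if x + y == 7 and x - y == 3:
            yield (x, y)

print(list(solutions()))  # [(5, 2)]
```

Notice that nowhere did we say how to solve the pair of equations; we only described what a solution looks like.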

Object Oriented Programming

Examples:

  • C++

  • Objective-C

  • Python

  • Ruby

  • PHP

  • JavaScript

  • Scala

  • Common Lisp (thanks to /u/sepok)

(List 12)

You're probably pretty aware of this paradigm already. The essence of it is that it tries to solve some of the problems of procedural programming by encapsulating stuff inside independent sections called "objects." Then there are some fancy features which allow you to do some pretty neat stuff with objects to save lots of sweat.
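A minimal Python sketch of that encapsulation (the class and its method names are invented for illustration):

```python
class Car:
    def __init__(self):
        self._speed = 0          # internal state, hidden inside the object

    def accelerate(self, amount):
        self._speed += amount    # the sanctioned way to change that state

    def speed(self):
        return self._speed

car = Car()
car.accelerate(30)
print(car.speed())  # 30
```

Code outside the object deals only with `accelerate` and `speed`; how the speed is stored is the object's own business.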

Parallel programming

Examples:

  • Haskell

  • Erlang

  • Scala

  • Clojure

(List 13)

When making programs which should run on many processors and many computers at the same time, you run into a whole class of new problems, many of which are really difficult to solve. This is where the front of programming language research is right now. If you want to become invaluable as a programmer in a few years, learn this shit properly.

Parallel programming covers all the ways you can reason about programming for multiple CPUs/machines without data corruption and other nasty problems.
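One small Python illustration of the "no shared mutable state" idea from the declarative section: map a pure function over several inputs on a pool of threads at once.

```python
from concurrent.futures import ThreadPoolExecutor

def square(n):
    return n * n   # pure: no shared state, so nothing to corrupt

# Each input is processed independently; the order of results is preserved.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))

print(results)  # [0, 1, 4, 9, 16]
```

Because `square` touches no shared variables, the threads cannot trash each other's data, which is exactly why languages like Haskell, Erlang, and Clojure lean so hard on immutability.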

(I'll try to wrap this up now.)

ABSTRACTION

When you take something hairy and complicated and wrap it in a nice package, you have just abstracted away the scary thing into something manageable.

Think about how complicated a car really is. You've got the engine, the exhaust system, the ignition, the fuel tank, the cooling, the oil, the differential, and a lot of things I can't even pronounce. Still, driving a car isn't scary. Why is that? It's because all those hairy details are abstracted away, and all the driver sees is a wheel, some pedals and a stick, pretty much.

Abstractions are all around when you're programming.

With no level of abstraction, you're writing machine code. You enter numbers manually, the very same numbers your computer speaks. If you abstract away those numbers, you could perhaps use letters instead of numbers. You get assembly.

If you abstract away the hairiness of assembly further, you might get a low-level language such as C. C is pretty comfortable to work with, but you're still exposed to some of your computer's internals. You can abstract away more of those, and you arrive at something similar to Java.

If you provide abstractions over Java, you get something like C#. Provide abstractions on C#, you get Python. Provide abstractions on Python, you get Haskell.

When someone talks about a "high-level" or "low-level" language, don't listen. Put your hands on your head and hum loudly. Those are two very confusing terms. What they essentially mean is "a high level of abstraction" or "a low level of abstraction." But what is a high level of abstraction, really? Is Java at a high level of abstraction? Is Python at a high level of abstraction? As it turns out: it depends.

I would say that Python is at a higher level than Java, and by that I mean that to make Java more like Python, you will have to add layers of abstraction. On the other hand, I would say that C is at a lower level of abstraction than Java, and by that I mean that to make Java more like C, you will have to remove layers of abstraction.

To go back to the car analogy, I would say a car with automatic transmission is at a higher level of abstraction than a car with a manual gearbox. With an automatic gearbox, you're essentially letting the computer do some of the work you previously did. The computer might not do it as well as you, but to some people that tradeoff is worth it, because it means a lot to them that they don't have to shift manually.

This means that there is no universal truth like "a high-level language is better than a low-level language" or the reverse. In higher-level languages, development times tend to be really short, because the programmer doesn't have to do as much manually. On the other hand, in lower-level languages, execution times tend to be quicker, but the program itself also takes much longer to develop.

What you prefer is completely up to you. (Or your project leader.)

WHERE TO GO NEXT?

I know this might have been a seemingly endless rant without answering any of your questions, but if you have anything specific to ask about now, please do. I can answer almost anything relating to what I've written, and if I can't, then I'm sure someone else on here can.

I have experience with most paradigms and types of programming. I have a fairly good understanding of the computer internals and what happens at a very low level as well as how to compose stuff at a higher level. I have toyed with all the paradigms I have mentioned, and most of the languages I've mentioned.

Personally, I can hardly stand programming in Java, because I feel it is at a much lower level of abstraction than I want. I rarely need raw performance, so that usually doesn't matter to me, and I prefer to stay at as high a level of abstraction as possible, which to me means declarative/functional programming.

Haskell amazes me with its level of abstraction. It's unbelievable how composable stuff in that language is.

I hope you all found it informative.


Great writeup. I only disagree in two points really:

  • C is not weakly typed. C types are quite strong. The problem with C is that it allows casting any pointer type to any other.
  • JavaScript is not a functional language. JavaScript functions use a context. JavaScript makes heavy use of functions, but you can do that in Java 8 or C++ just as well. At its base, JavaScript is imperative.

VB.net should have been added. Depending on the field you're entering as a programmer, vb.net is great if you need to learn F#. I also agree with @cyrano.witness.

I upvoted you.
