
Block 0 – Language description

Programming languages are typically described in terms of syntax and semantics. Syntax describes a language's sentences, i.e. their structure and form. Semantics describes the meaning of the sentences we create using that structure: what does the sentence mean, and is it valid? Together they define the language. Take this code example:


“For example, the syntax of a Java while statement is

while (boolean_expr)
    statement

The semantics of this statement form is that when the current value of the Boolean expression is true, the embedded statement is executed” (Sebesta 1988a, pp. 134-135).

Semantics can also be divided into two types. The first is static semantics, which describes a collection of rules that judge whether a program is of a legal form; a set of restrictions, to put it plainly. These rules are enforced at compile time, for example checking type safety to predict whether the program is safe to execute, such as verifying that variables of type integer are assigned integer values and not string values.

// Static semantics
int i = 5;   // correct type
int c = '5'; // compiles (gives us 53, the decimal value of '5' in the ASCII table)
int s = "5"; // wrong type: a String cannot be assigned to an int

Dynamic semantics is the second type, and there are three different methods of describing it. According to Professor P. Sewell (2008) they are as follows:

Operational: Operational semantics defines the meaning of a program in terms of the computation steps it takes in an idealised execution. It is a method often used when learning a programming language. This method can be very effective, but only if it is kept simple and informal; otherwise it can grow too complex.
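
As a minimal illustration (my own example, not from Sewell), an operational description of evaluating an arithmetic expression is just the sequence of computation steps it takes:

// Each arrow is one computation step in an idealised execution
(1 + 2) * 3 -> 3 * 3 -> 9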

Denotational: Denotational semantics is the most abstract of the three and utilizes mathematical structures to define the meaning of a program. This method is quite complex and is therefore not as useful to language users as the others; it is, however, good at concisely describing a language.

Axiomatic: Axiomatic semantics utilizes pre- & post-conditions to define the meaning of a program. This method provides a good framework for reasoning about the function of a program during construction, but it is limited when it comes to describing programming languages to compiler writers.
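
Such pre- & post-conditions are commonly written as a Hoare triple {P} statement {Q} (my own illustration, not from Sewell): if the precondition P holds before the statement executes, the postcondition Q holds afterwards.

// For example
{x = 5} x = x + 1; {x = 6}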

The rules of dynamic semantics are enforced during run time, for example making sure array accesses are not out of bounds.

By defining both the language's syntax and semantics we can create a clearer definition for users of how the language functions. This is especially important since, according to R. Sebesta (1988b), one of the problems in language description is that the language may be used by people across the world, meaning the user base will be very diverse. It is therefore important to have a clearly defined syntax & semantics, so that users do not interpret the language's use in different ways, and so that they can construct expressions and statements correctly.

During compilation, the code that we have written is translated to machine language that the computer can understand, but before that can occur the compiler makes sure that the code is ‘safe’: that there are no syntactically incorrect sentences and no errors in terms of the static semantics, such as violations of type safety. If we have not made sure that everything is correct, the compiler will throw back an error to warn us that something is wrong.

Example:

int j;
j++;                    // error: variable j might not have been initialized
String s;
System.out.println(s);  // error: variable s might not have been initialized

Although both statements are syntactically correct, the compiler hands us an error because the variables are used before they have been initialized and therefore can’t be utilized in the code that is shown; these are typical semantic errors often made by beginners.

Once our code is safe enough to pass compilation it can begin to be interpreted by the machine. During run time, dynamic semantics lets us discover issues such as an index out of range:

String[] index = new String[3];
index[3] = "3";

The compiler gives no complaints, but when we run the code we will receive “Exception in thread “main” java.lang.ArrayIndexOutOfBoundsException: 3”, since an array with 3 elements ranges from index 0 to 2.

Block 1 – Object oriented programming

Binding is the reference between a method call and a method definition. There are two types of binding: the first is static binding and the second is known as dynamic binding. Before explaining the two binding methods I would like to explain overriding & overloading. Overriding is a feature where a parent and child class have a method with the exact same name and parameters; by overriding the parent, the child can provide a different implementation of an inherited method.

// Overriding
public class Human {
    public void speak() {
        System.out.println("The human speaks");
    }
}

public class Boy extends Human {
    @Override
    public void speak() {
        System.out.println("The boy speaks");
    }
}

// Static bound – Boy during and after compilation
Boy b = new Boy();
// Static bound – Human during and after compilation
Human h = new Human();
// Dynamic bound – Human during compilation, Boy during run-time
Human hb = new Boy();

b.speak(); // The boy speaks
hb.speak(); // The boy speaks
h.speak(); //The human speaks

Overloading
is a feature which allows a class to contain two or more methods with the same
name but different parameters.

// Overloading
yell(String);
yell(String, int);
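
To make this concrete, here is a minimal sketch of an overloaded method (the class and method bodies are my own invention):

// Overloading: same name, different parameter lists; the compiler picks
// the version that matches the arguments of the call
public class Speaker {
    public void yell(String msg) {
        System.out.println(msg.toUpperCase());
    }

    public void yell(String msg, int times) {
        for (int i = 0; i < times; i++) {
            yell(msg); // reuses the one-parameter version
        }
    }
}

A call yell("hi") resolves to the first version, and yell("hi", 3) to the second.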

Static binding, also known as early binding, occurs during the compilation of a program. Examples of static binding are methods that have been declared private, static or final, because such methods cannot be overridden; they can, however, be overloaded. This type of binding has the benefits of being more efficient, having better performance and being less complex than the dynamic variant. Dynamic binding is a more flexible variant but sacrifices performance and has increased complexity. Dynamic binding occurs during run time, which is why it is also called late binding; it is what allows subclasses to override methods that they have inherited from their parent class.
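
The difference can be sketched by reusing the Human/Boy classes from above (the greet helpers are my own, hypothetical additions): overloaded calls are resolved from the static type at compile time, while overridden calls are dispatched on the run-time type.

// Hypothetical overloaded helpers
static void greet(Human h) { System.out.println("Hello, human"); }
static void greet(Boy b)   { System.out.println("Hello, boy"); }

Human hb = new Boy();
greet(hb);  // "Hello, human" – overload chosen from the static type (Human)
hb.speak(); // "The boy speaks" – override dispatched on the run-time type (Boy)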

According to R. Sebesta (1988c), dynamic binding is the third essential characteristic of object-oriented programming because it provides dynamic binding of messages to method definitions. Dynamic binding allows programmers to create a form of polymorphism in statically typed languages such as Java. Take the earlier example: all boys are humans, but not all humans are boys. When expressed in code we can showcase how the dynamic binding of the object ‘hb’ gives us more flexibility by allowing the child class to override the parent class method. Dynamic binding allows developers to more easily extend software systems by reusing code both during and after development.

Dynamic binding is an important part of object-oriented programming: it allows developers to implement polymorphism by combining it with interfaces and inheritance. When we have statically bound code we can decipher its meaning just by reading it, but dynamic binding is, as its name suggests, dynamic. It is bound during run time and allows our code to be flexible. Polymorphism can be defined as an entity that can take on many forms, and to do that the entity must be dynamic.

Inheritance is the concept of inheriting properties (methods/variables) from a parent class. For example, if class B extends class A then class B will inherit the properties of A. When we combine that with dynamic binding we can specify how the inherited properties behave depending on what type of object uses them. For inheritance to be as effective as possible it is important to properly encapsulate classes, meaning grouping related properties together.

Within object-oriented programming there is also the concept of subtype polymorphism (subtyping), a form of polymorphism where a subtype relates to a supertype. Inheritance can easily be mistaken for subtyping or delegation, and although they are all similar they are not the same thing. Delegation is the process of creating a means of communication between classes by using an object of one class to forward messages to another. Classes that implement interfaces must possess a method of the same name and type for every method that exists in the interface. A Java class can implement multiple interfaces, which some refer to as multiple inheritance; this is wrong, as Java does not allow multiple inheritance of classes. To give an example of subtyping: when class A implements interface B we say that A is a subtype of B. This means that an object of class A can be used wherever the type B is expected, and choosing which implementation to run must be done during run time, since it is a dynamic action.
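
A minimal sketch of that last sentence in code (my own example, using the same letters):

// B is the supertype (an interface); A is a subtype of B
interface B {
    void run();
}

class A implements B {
    public void run() {
        System.out.println("A runs");
    }
}

B b = new A(); // an A can be used wherever a B is expected
b.run();       // which implementation runs is decided at run time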

Block 2 – Macros for general programs

Macros are a set of instructions compressed into one. They are defined similarly to functions, but with some differences: a macro does not return a value but a form, and macros are processed before compilation, expanding into a set of instructions. Because macros are pre-processed they lack type checking, and their arguments are passed without being evaluated, which could mean a macro ends up with incompatible operands. Macros are also more difficult to debug than functions, because the debugger cannot step through a macro; generally, when debugging a macro, one uses a macro-expand function to unpack it. If you can accomplish your goal with a function, you should avoid using a macro.

Higher-order functions are functions that handle other functions. They can be classified as functions that either return functions or accept functions as parameters; they can also do both. Higher-order functions come in different forms; one example is function composition, which is quite common in calculus. A function composition could look like this: y = (f ∘ g)(x) = f(g(x)), meaning that by creating a composition of the functions f & g we produce a new function called y.
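
In Java this kind of composition can be written with java.util.function.Function; a minimal sketch (the concrete functions f and g are my own choice):

import java.util.function.Function;

Function<Integer, Integer> g = x -> x + 1;   // g(x) = x + 1
Function<Integer, Integer> f = x -> x * 2;   // f(x) = 2x
Function<Integer, Integer> y = f.compose(g); // y(x) = f(g(x)) = 2(x + 1)

System.out.println(y.apply(3)); // prints 8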

Functional programming languages are designed to mimic mathematical functions as closely as possible, unlike imperative languages, which evaluate expressions and store the results in variables that represent memory locations. Functional programming languages don't use variables or memory cells in that sense; they instead focus on function definition and evaluation. By using higher-order functions in a functional language we can write code in a declarative paradigm, meaning the code describes what we want to achieve instead of how to achieve it.

Take this example from Schmitz (2017): we want to write code that creates a list of the people who are of age 18 or above.

Imperative approach – In this approach our code describes how the problem is to be solved step by step: loop through the list, check the age of every entry, and if they pass the criteria add them to the results, similarly to languages like Java & C#.

const peopleAbove18 = (collection) => {
    const results = [];
    for (let i = 0; i < collection.length; i++) {
        const person = collection[i];
        if (person.age >= 18) {
            results.push(person);
        }
    }
    return results;
};

Declarative approach – In this approach we simply declare that we want to filter out people who do not match our criteria; this is done by using a higher-order function called ‘filter’.

const peopleAbove18 = (collection) => {
    return collection
        .filter((person) => person.age >= 18);
};
By using higher-order functions we write less code, and code that is easier to read. We can also use macros to extend our functional languages; for example, the language Lisp has predefined macros such as PUSH & POP but also allows us to define our own macros. However, as I mentioned earlier, if a problem can be solved with a function then there is little reason to use a macro.

According to documentation on Microsoft's website written by Wagner, B. (2015), functional programming developers approach problems as a math exercise: they avoid the different program states and focus on function application. In an imperative language developers may instead solve problems by utilizing the three pillars of OOP: by creating a class hierarchy and implementing classes with proper encapsulation, developers can define the properties and behaviours of an object and then specialise and differentiate classes by using polymorphism.
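
A minimal object-oriented sketch of those pillars (class names and behaviour are my own invention):

// Encapsulation: internal state is private, exposed through a method
class Animal {
    private final String name;
    Animal(String name) { this.name = name; }
    String getName() { return name; }
    String sound() { return "..."; }
}

// Inheritance: a Dog is an Animal; polymorphism: sound() is specialised
class Dog extends Animal {
    Dog(String name) { super(name); }
    @Override
    String sound() { return "Woof"; }
}

Animal a = new Dog("Rex");
System.out.println(a.getName() + " says " + a.sound()); // Rex says Woof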

Block 3 – Logic

Inferencing Process

Inferencing is the process of taking logical steps of reasoning that lead to a conclusion; to put it in simple terms, it is an educated guess. For example, assume that you are reading a book and come across an unfamiliar word: by using other, familiar words that are used in the same context you can create meaning for the unfamiliar one.

The programming language Prolog has a built-in inference engine which handles the inferencing process to derive conclusions. Prolog's implementation of inferencing is called backward chaining; R. Sebesta (1988d) explains that the backward approach starts with a query and then proceeds to find a sequence of matching propositions. This approach is also known as top-down resolution. This means that Prolog's inferencing process is goal driven, which is useful for problem solving, e.g. who is a student?

To give an
example in Prolog code:

math_student(trisha).
math_student(alexis).
computer_science_student(edward).

teacher(olivier).

student(X):-
    math_student(X);
    computer_science_student(X).

In our code, we state “X is a student if X is a math_student or a computer_science_student”.

?- student(X).

We ask Prolog to determine who is a student. To do that it must check who fulfils the criteria that have been specified, which it does by going through our list of statements. It will ask itself: when is X a math_student? X is a math_student when X = trisha or X = alexis. Since we used the “;” symbol, which means logical OR, Prolog will also have to determine when X is a computer_science_student. Once this process is finished Prolog will have determined who the students are.

What if we
ask “Is olivier a student?”:

?- student(olivier).

Like before, Prolog starts off by checking the first condition, which is whether ‘olivier’ is a math_student; this will of course fail, because olivier is not a math_student. Prolog will continue and check the next condition until they run out, and it will conclude that olivier is not a student.

List Processing

Lisp stands for LISt Processing and is a programming language that handles lists. Lists in Lisp are built from objects called cons cells. Cons cells, or construct memory objects, store two fields: the first contains the CAR (head) while the second contains the CDR (tail).

// For example
(list 10 15)

This list has 2 cells: the first contains the value 10 and a pointer to the second cell, which contains the value 15 and a pointer to the object nil. nil has two meanings: empty list or false.

These cells are created when the function CONS is used; the function accepts two arguments and returns a cell which stores the two values. When a cons cell is printed, its values are separated by a dot, called the dotted pair notation.

// We construct a new cons cell that contains the values 10 and 15
(CONS 10 15) -> (10 . 15)

We can manipulate the cells by utilizing the functions CAR and CDR: the first gives us the first element (head) while the other gives us the rest (tail). Lisp processes lists by looking at the first element and trying to evaluate it as either a function name or a macro name. It then looks at the following atoms/lists and evaluates them as arguments to the function/macro.

// For example
(+ (- 6 2) 5) -> (+ 4 5) -> 9

If we were to add a single-quote symbol before a list it would tell the interpreter not to evaluate it.

// A list that contains A, B and C
'(A B C)

// A call to the function A with arguments B and C
(A B C)

Prolog handles its list processing similarly to functional languages. Prolog list syntax looks like this:

[10, 15]

However, this is a simplified version of the actual syntax; in truth it looks like this:

.(10, .(15, []))

Notice the similar use of the dot symbol in Prolog’s and Lisp’s syntax.

Prolog is also capable of constructing & deconstructing lists similarly to Lisp, but instead of using the functions CAR & CDR, Prolog syntax uses a symbol called ‘bar’ (|) to split a list into its head and tail.

// For example [Head|Tail]
[1, 2, 3] = [H|T].
H = 1
T = [2, 3]

Prolog uses recursion to search lists for information: it starts by inspecting the first element and repeats the process for the remaining ones. The process stops once we find what we are searching for or we reach the end of the list.

With all languages that utilise lists, some basic functions and operations are necessary; one example is the append operation. Both Prolog and Lisp have an operation called append, and although they work in a similar fashion in both languages, the CONS function from Lisp is a lot better performance-wise and is therefore the preferred option. Prolog, however, does not have a CONS function and uses the recursive predicate append/3 to combine lists.

E.g. we want to append two lists in Prolog using the append/3 predicate, which is defined as:

append([], L, L).
append([H|T], L2, [H|L3]) :-
    append(T, L2, L3).

The first clause states that the concatenation of an empty list with another list gives us that same list. The second clause states that if we concatenate two lists where the first is non-empty, we end up with a list whose head is the head of the first list and whose tail is the concatenation of the first list’s tail with the second list. If we give the following query:

?- append([a, b], [d, e], [a, b, d, e]).

Prolog will respond with ‘yes’, because the third parameter is the concatenation of the first two. Prolog’s append copies all list elements and transfers them into a new memory block, which leads to an overall slower process than Lisp’s CONS function. Another minor difference is that CONS adds new elements to the start of a list while append/3 adds them to the end.
