The concept of an algorithm. Programming

Development of a program using such a programming system consists of two stages: 1) creation of the program's graphical interface elements in a visual mode; 2) development of the program code. This approach greatly simplifies the creation of programs, since developing a graphical interface manually (in procedural languages) is a complex and time-consuming process.

The first step to understanding the importance of studying and knowing algorithms is to define precisely what is meant by an algorithm. An algorithm in programming is a clear and precise sequence of actions written in a programming language. According to the popular book Algorithms: Construction and Analysis (Cormen, Leiserson, Rivest, Stein), “an algorithm is any well-defined computational procedure whose input is a certain quantity or set of quantities, and whose result is an output value or set of values.” In other words, algorithms are like road maps for achieving a clearly defined goal. The code for calculating the terms of the Fibonacci sequence is an implementation of a specific algorithm. Even a simple function that adds two numbers is an algorithm, albeit a simple one.
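
Both examples mentioned above fit in a few lines. A minimal Python sketch (the function names fib and add are our own, chosen for illustration):

```python
def fib(n):
    """Return the n-th Fibonacci number (0-indexed), computed iteratively."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def add(x, y):
    """Even this one-liner is an algorithm: a well-defined input and output."""
    return x + y
```

Each function is a well-defined procedure: given an input, it produces a determined output in a finite number of steps.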

To create an algorithm (program), you need to know:

    a complete set of initial task data (initial state of the object);

    the purpose of creating the algorithm (the final state of the object);

    the executor's system of commands (that is, the set of commands that the executor understands and can execute).

The resulting algorithm (program) must have the following set of properties:

    discreteness (the algorithm is divided into separate steps - commands);

    unambiguity (each command determines the only possible action of the executor);

    clarity (all of the algorithm's commands are included in the executor's system of commands);

    effectiveness (the executor must solve the problem in a finite number of steps).

Most algorithms also have the property of mass character (the same algorithm can be used to solve many similar problems).

Some algorithms, such as those for calculating the Fibonacci sequence, are intuitive and relate to innate logical thinking and problem-solving skills. However, most of us would do well to learn complex algorithms so that in the future we can use them as building blocks to solve logic problems more efficiently. In fact, you might be surprised how many complex algorithms people use when checking email or listening to music. This article introduces some basic ideas of algorithm analysis, with practical examples illustrating the importance of studying algorithms.

A programming language is a set of rules for writing algorithmic structures and data.


Algorithm Execution Time Analysis

One of the most important aspects of an algorithm is its speed. It is often easy to come up with an algorithm that solves a problem, but if the algorithm is too slow, it goes back for revision. Because the exact speed of an algorithm depends on where it runs, as well as on implementation details, computer scientists usually talk about execution time relative to the input data. For example, if the input consists of N integers, an algorithm may have a running time proportional to N², written as O(N²). This means that if you run an implementation of the algorithm on a computer with an input of size N, it will take C*N² seconds, where C is some constant that does not change as the size of the input changes.
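
To make the O(N²) notion concrete, here is a hypothetical routine that compares every element with every other one; it is the count of elementary operations, not wall-clock time, that grows as N²:

```python
def count_pair_comparisons(items):
    """Count one unit of work for every ordered pair of elements - N * N in total."""
    count = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            count += 1  # one elementary comparison per pair
    return count
```

Doubling the input size quadruples the count, which is exactly what O(N²) predicts.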

However, the execution time of many complex algorithms depends not only on the size of the input data, but also on many other factors. For example, an algorithm for sorting a set of integers can run much faster if the set is already sorted. It is customary to talk about the worst-case execution time and the average-case execution time. The worst-case execution time is the maximum time the algorithm can run, given the “worst” of all possible inputs. The average-case execution time is the average running time of the algorithm over all possible inputs. Of the two, the worst case is the easier to reason about and is therefore used more often as a benchmark for a given algorithm. Determining the worst-case and average-case execution times of an algorithm can be quite involved, because it is usually impossible to run the algorithm on all possible inputs.

It was noted above that the same algorithm can be written in different ways. You can write an algorithm down in natural language; this is how we use recipes, instructions, and so on. To record algorithms intended for formal executors, special programming languages are used. Any algorithm can also be described graphically in the form of a block diagram (flowchart). A special notation system has been developed for this purpose:

(The graphic symbols themselves are not reproduced here; each entry below gives the block's purpose, with notes.)

    Start and end of the algorithm.

    Data input and output. Data output is sometimes denoted by a separate symbol.

    Action. In computational algorithms this block is used to denote assignment.

    Fork - the component needed to implement branches and loops.

    Start of a loop with a parameter.

    Typical process. In programming - procedures or subroutines.

    Transitions between blocks.

As an example, here is a description of the algorithm for summing two quantities in the form of a block diagram:
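
Since the flowchart image itself is not reproduced here, the same algorithm can be transcribed into code. A minimal Python sketch, assuming the usual start, input, action, output, end sequence of blocks (the name sum_two is our own):

```python
def sum_two(a, b):
    # Input block: the quantities a and b arrive as parameters.
    y = a + b   # Action block: the assignment y := a + b
    return y    # Output block: the result y
```

For example, `sum_two(2, 3)` returns 5.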

This way of describing an algorithm is the most visual and understandable to humans. Therefore, algorithms for formal executors are usually developed first in the form of a flowchart, and only then is a program created in one of the programming languages.

Sorting

Sorting is a good example of an algorithm that programmers use often. The easiest way to sort a group of elements is to start by removing the smallest element from the group and putting it first. Then the smallest of the remaining elements is removed and placed second, and so on. Unfortunately, the running time of this algorithm is O(N²), meaning it takes an amount of time proportional to the square of the number of elements. If we had to sort a billion elements, this algorithm would require 10¹⁸ operations. If we assume that a typical desktop PC performs approximately 10⁹ operations per second, it would take years to finish sorting that billion elements.
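
The simple method just described is known as selection sort. A short Python sketch:

```python
def selection_sort(items):
    """Repeatedly move the smallest remaining element to the front: O(N^2)."""
    items = list(items)  # sort a copy, leave the input untouched
    for i in range(len(items)):
        # index of the smallest element in the still-unsorted tail
        smallest = min(range(i, len(items)), key=items.__getitem__)
        items[i], items[smallest] = items[smallest], items[i]
    return items
```

The nested work (an inner scan for each outer position) is where the N² operation count comes from.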

Fortunately, there are a number of more advanced algorithms, such as quicksort, heapsort, and mergesort. These algorithms have a running time of O(N*Log(N)). Thus, the number of operations required to sort a billion elements drops to limits so reasonable that even the cheapest desktop PC can perform the sort. Instead of a billion squared operations (10¹⁸), these algorithms require only about 10 billion operations (10¹⁰), i.e. 100 million times fewer.

Shortest Path

Algorithms for finding the shortest path from one point to another have been studied for many years. There are plenty of practical applications of these algorithms, but for simplicity of presentation we will stick to the following statement: we need to find the shortest path from point A to point B in a city with several streets and intersections. There are many different algorithms for solving this problem, and they all have their own advantages and disadvantages. Before we dive into them, let's look at the execution time of a simple brute-force algorithm. If an algorithm considers every possible path from A to B (that does not form a cycle), it is unlikely to finish in our lifetime, even if A and B are in a small town. The running time of this algorithm is exponential, denoted O(C^N) for some constant C. Even for small values of C, C^N becomes an astronomical number when N grows moderately large.

One of the fastest algorithms for solving this problem has a running time of O(E + V*Log(V)), where E is the number of road segments and V is the number of intersections. The algorithm takes about 2 seconds to find the shortest path in a city of 10,000 intersections and 20,000 road segments (there are usually about 2 road segments per intersection). This algorithm is known as Dijkstra's algorithm; it is quite complex and requires the use of a priority queue data structure. In some cases, however, even this execution time is too slow (take, for example, finding the shortest path from New York to San Francisco - there are millions of intersections in the USA), and in such cases programmers try to improve the execution time using so-called heuristics. A heuristic is an approximation of something relevant to the problem. In the shortest path problem, for example, it may be useful to know how far a point is from the destination. Knowing this, you can develop a faster algorithm (for example, the A* search algorithm in some cases works much faster than Dijkstra's algorithm). This approach does not always improve the worst-case execution time of the algorithm, but in most real-world applications the algorithm runs faster.
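
A compact sketch of Dijkstra's algorithm in Python, using the standard-library heapq module as the priority queue. With a binary heap the bound is O(E*Log(V)); the O(E + V*Log(V)) figure quoted above assumes a Fibonacci heap. The adjacency-list encoding of the graph is our own choice:

```python
import heapq

def dijkstra(graph, start):
    """Shortest distances from start in a graph {node: [(neighbor, weight), ...]}."""
    dist = {start: 0}
    queue = [(0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(queue, (nd, neighbor))
    return dist
```

For a toy road network `{"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}`, the distance from A to C comes out as 3 (via B), not 4.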

Approximate Algorithms

Sometimes even the most advanced algorithm with the most advanced heuristics is too slow on the fastest computer. In such cases, it is necessary to reduce the accuracy of the final result. Instead of trying to obtain the shortest path, you can settle for a path that is, for example, at most 10% longer than the shortest one.

In fact, there are many important problems for which the currently known algorithms produce the optimal result too slowly. The most famous group of such problems is called NP (non-deterministic polynomial). If a problem is called NP-complete or NP-hard, it means that no one knows an efficient way to obtain the optimal solution. Moreover, if someone develops an efficient algorithm for solving one NP-complete problem, that algorithm can be applied to all NP-complete problems.

A good example of an NP-hard problem is the traveling salesman problem. A seller wants to visit N cities, and he knows how long it takes to travel from one city to another. The question is how quickly can he visit all the cities? The fastest known algorithm for solving this problem is too slow - and many believe it will always be - so programmers look for algorithms that are fast enough to give a good solution, but often not an optimal one.
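
One such fast-but-not-optimal approach is the nearest-neighbor heuristic: always travel to the closest city not yet visited. A sketch; the distance-matrix encoding and function name are assumptions made for illustration:

```python
def nearest_neighbor_tour(dist, start=0):
    """Greedy TSP heuristic: repeatedly visit the closest unvisited city.

    `dist` is a symmetric matrix, dist[i][j] = travel time between cities.
    Runs in O(N^2), but the tour it returns is good, not necessarily optimal.
    """
    n = len(dist)
    tour = [start]
    unvisited = set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda city: dist[here][city])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour
```

This is exactly the trade-off described above: a quick answer that is usually acceptable, in place of an exact algorithm that may never finish.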

Random Algorithms

Another approach used for some problems is to make the algorithm random. This approach does not improve the algorithm's worst-case time, but quite often works well in the average case. The quicksort algorithm is a good example of the use of randomization. In the worst case, quicksort sorts a group of elements in O(N²), where N is the number of elements. If randomization is used in the algorithm, the chances of hitting the worst case become negligibly small, and on average quicksort runs in O(N*Log(N)) time. Other algorithms guarantee O(N*Log(N)) running time even in the worst case, but they are slower in the average case. Although both have running times proportional to N*Log(N), quicksort has a smaller constant factor - i.e. it requires C*N*Log(N) operations, while the other algorithms require more than 2*C*N*Log(N) operations.
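
A minimal sketch of randomized quicksort. For clarity it builds new lists rather than partitioning in place (costing extra memory), but the key idea is visible: a randomly chosen pivot makes the O(N²) worst case vanishingly unlikely:

```python
import random

def quicksort(items):
    """Randomized quicksort: expected O(N log N) time."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)          # random pivot defeats adversarial inputs
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```
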

Another algorithm that uses random numbers finds the median of a group of numbers, with an average running time of O(N). This is much faster than sorting the numbers and taking the middle one, which takes O(N*Log(N)). There are deterministic (non-random) algorithms that can find the median in O(N) time, but the random algorithm is easier to understand and often faster than these deterministic algorithms.

The main idea of the median-search algorithm is to pick one of the numbers at random and count how many numbers in the group are less than or equal to it. Let's say there are N numbers and K of them are less than or equal to the selected number. If K is less than half of N, then the median is the (N/2-K)-th smallest among the numbers greater than the selected one, so we discard the K numbers that are less than or equal to it. Now, instead of the median, we want to find the (N/2-K)-th smallest number. The algorithm is the same: we simply pick a number at random again and repeat the steps described.
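
The selection procedure described above is commonly known as quickselect. A Python sketch that finds the k-th smallest element (the median of N numbers is the case k = N//2):

```python
import random

def quickselect(items, k):
    """Return the k-th smallest element (0-based k), expected O(N) time."""
    pivot = random.choice(items)
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if k < len(less):
        return quickselect(less, k)               # answer is among the smaller ones
    if k < len(less) + len(equal):
        return pivot                              # the pivot itself is the answer
    return quickselect(greater, k - len(less) - len(equal))
```

Each round discards a whole side of the partition, which is why the expected total work is linear rather than N*Log(N).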

Compression

Another class of algorithms is designed for data compression. A compression algorithm does not have a single expected result (as a sorting algorithm does); instead it optimizes against some criterion. In the case of data compression, an algorithm (for example, LZW) tries to make the data occupy as few bytes as possible while still allowing it to be decompressed to its original form. In some cases, this type of algorithm uses the same methods as other algorithms, producing a good, but not necessarily optimal, result. For example, JPG and MP3 compress data in such a way that the final result is of lower quality than the original, but also smaller in size. MP3 compression does not preserve every feature of the original audio file; it attempts to preserve enough detail to provide acceptable quality while significantly reducing the file size. The JPG format follows the same principle, but the details differ significantly because the goal is to compress an image, not audio.
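
LZW itself is too involved to show here, but the lossless round-trip it guarantees can be illustrated with run-length encoding, a far simpler compression scheme (the function names are our own):

```python
def rle_encode(text):
    """Run-length encoding: collapse each run of equal characters to (char, count)."""
    out = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((text[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    """Exactly invert rle_encode - the defining property of lossless compression."""
    return "".join(ch * n for ch, n in pairs)
```

Decoding the encoded form always reproduces the original byte-for-byte, unlike the lossy JPG/MP3 schemes discussed above.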

Why do you need to know all sorts of algorithms?

To use algorithms properly, it is important to know all the types of algorithms mentioned. If you have to develop an important piece of software, you should be able to estimate the speed of your algorithm. The accuracy of your estimate depends on how proficient you are at analyzing the execution time of algorithms. In addition, you need to know the details of the algorithms, which allows you to predict the special cases in which the program will not run quickly or will produce unacceptable results.

Of course, there will be times when you stumble upon previously unexplored problems. In such cases, you need to come up with a new algorithm, or apply the old algorithm in a new way. The more you know about algorithms, the more likely you are to find a good solution to a problem. In many cases, a new problem can easily be reduced to an old one, but to do this you need to have a fundamental understanding of the old problems.

As an example, consider how network switches work. A switch has N cables connected to it and receives data packets arriving over these cables. The switch must first parse the packets and then forward them over the correct cables. A switch, like a computer, operates in discrete mode - packets are sent at discrete intervals, rather than continuously. A fast switch strives to send as many packets as possible during each interval, otherwise they accumulate and the switch fails. The goal of the algorithm is to send the maximum number of packets during each interval, while ensuring that packets that arrived earlier are also sent earlier. It turns out that an algorithm known as “stable matching” is suitable for this problem, although at first glance that may not be obvious. Such connections between problem and solution can only be discovered with existing algorithmic knowledge.

Real examples

There are plenty of examples of solutions to real problems that require the latest algorithms. Almost everything you do on a computer depends on algorithms that someone spent a long time developing. Even the simplest programs would not exist without the algorithms that work behind the scenes to manage memory and load data from the hard drive.

There are dozens of examples of complex algorithms in use, but we will discuss two problems whose solution requires the same skills as solving some TopCoder problems. The first is known as the maximum flow problem, and the second involves dynamic programming, a technique that often solves problems with seemingly impossible speed.

Algorithm for finding the maximum flow

The problem of finding the maximum flow is to use the existing network to best move something from one place to another. This particular problem first arose in the 1950s in connection with the railway lines of the Soviet Union. The United States wanted to know how quickly the Soviet Union could transport materials to its satellite states in Eastern Europe through its railroad network.

In addition, the United States wanted to know which part of the rails could most easily be destroyed in order to cut off the satellite states from the rest of the Soviet Union. It turned out that the two problems were closely related and that solving the maximum flow problem also solved the minimum cut problem, which would ultimately reveal the cheapest way to cut off the Soviet Union from its satellites.

The first efficient algorithm for finding the maximum flow was invented by Ford and Fulkerson. The algorithm was subsequently named the Ford-Fulkerson algorithm and is one of the most famous algorithms in computer science. Over the past 50 years it has undergone a number of improvements that have made it faster (although some of these improvements are daunting in their complexity).

Since the problem was formulated precisely, many different applications have been found for it. The algorithm relates directly to the Internet, where it is important to transport as much data as possible from one point to another. The problem also occurs in a variety of business processes and is an important part of operations research. For example, if there are N employees and N tasks that need to be done, but not every employee can handle every task, then finding the maximum flow yields an assignment of the N employees to the tasks such that every task is completed, provided that this is possible. The Graduation problem from TopCoder SRM 200 is a good example of a maximum flow problem.
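
A sketch of the Ford-Fulkerson method in its Edmonds-Karp form (BFS augmenting paths), applied to a made-up employee/task example like the one above: a source feeds the workers, workers connect to the tasks they can handle, and tasks drain into a sink, all with capacity 1. The graph encoding and node names are illustrative assumptions:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp max flow. `capacity` is a dict of dicts: capacity[u][v]."""
    # Build a residual graph with a reverse entry for every edge.
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # Recover the path, find its bottleneck, and push flow along it.
        path = []
        v = sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

With unit capacities, each unit of the resulting flow corresponds to one worker assigned to one task.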

Sequence comparison

Many coders go through their entire careers without ever implementing an algorithm that uses dynamic programming. However, dynamic programming appears in a number of important algorithms. One of them finds the differences between two sequences; most programmers have probably used it, even without understanding how it works. This algorithm calculates the minimum number of insertions, deletions, and edits required to convert sequence A into sequence B.

For example, consider the sequences "AABAA" and "AAAB". To convert the first sequence to the second, the simplest thing to do is remove the B in the middle and change the last A to B. This algorithm has many applications, including some problems related to DNA and plagiarism detection. However, many programmers use it primarily to compare versions of the same source code file. If the elements of the sequence are lines in a file, then this algorithm allows you to find out which lines need to be deleted, inserted, changed in order to convert one version of the file to another.

Without dynamic programming, you have to go through an exponential number of transformations to get from one sequence to another. However, dynamic programming reduces the execution time of the algorithm to O(N*M), where N and M are the number of elements in two sequences.
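
The O(N*M) table described here is the classic edit-distance (Levenshtein) computation. A sketch:

```python
def edit_distance(a, b):
    """Minimum insertions, deletions and substitutions turning a into b: O(N*M)."""
    n, m = len(a), len(b)
    # d[i][j] = distance between the prefixes a[:i] and b[:j]
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                 # delete all i characters
    for j in range(m + 1):
        d[0][j] = j                 # insert all j characters
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution (or match)
    return d[n][m]
```

For the example above, converting "AABAA" into "AAAB" needs exactly 2 edits: delete the B, change the last A to B.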

Conclusion

There are as many different algorithms as there are problems to solve. However, there is a good chance that the problem you are trying to solve is in some way similar to another problem. By developing a deep understanding of a wide range of algorithms, you will be able to choose the right one and apply it to your problem. In addition, solving problems in TopCoder competitions will help you hone your skills in this area. Many of these problems seem artificial and unrealistic, but they require the same set of algorithmic knowledge that is needed every day in the real world.

To write applications of varying levels of complexity, you first need to learn how to do it. And it is advisable to start with the very basics of algorithmization and programming. We will talk about them in this article.

Computer science is a complex technical science whose task is to systematize the methods of creating, processing, transmitting, storing and reproducing data using computer technology. It also covers the operating principles of that technology and the management methods that help achieve the goal. The term “computer science” itself is of French origin, a hybrid of the words “information” and “automation”. It arose thanks to the development and spread of new technologies for collecting, processing and transmitting data, which were associated with recording data on computer media. This is the origin of computer science. The basics of algorithmization and programming are among the most important areas of this science.

What does it do?

Computer science faces the following challenges:

  1. Hardware and software support for computer technology.
  2. Means for ensuring interaction between humans and computer components.

The term “interface” is often used to refer to the technical part of this interaction. The basics of algorithmization and programming are always used when creating products intended for mass distribution that “should” win over a wide audience. After all, to become popular, an application must work and look its best.

Presentation of Algorithms

Algorithms can be written down in a significant number of ways. The most popular are the following:

  1. Verbal and formulaic description. This implies the placement of text and specific formulas that will explain the features of interaction in all individual cases.
  2. Block diagram. This implies the presence of graphic symbols that make it possible to understand the features of the program’s interaction within itself and with other applications or the computer’s hardware. Each of them can be responsible for a separate function, procedure or formula.
  3. This implies the creation of separate methods of description for specific cases, which show the features and order of tasks.
  4. Operator diagrams. This involves creating a prototype - it will show the interaction based on the paths that individual operands will take.

  5. Pseudocode. A sketch of the skeleton of the program.

Recording the algorithm

How do you start creating your own prototype of a program, function, or procedure? It is enough to follow these general recommendations:

  1. Each algorithm must have its own name, which explains its meaning.
  2. Be sure to make sure there is a beginning and an end.
  3. Input and output data must be described.
  4. You should specify the commands that will be used to perform certain actions on specific information.

Recording methods

There can be as many as five representations of an algorithm, but only two ways of writing it down:

  1. Formal-verbal. It is characterized by the fact that the description is made mainly using formulas and words. The content, as well as the sequence of execution of the stages of the algorithm, in this case is written in natural professional language in free form.
  2. Graphic. The most common. It uses block symbols or algorithm diagrams. The connection between them is shown using special lines.

Developing the program structure

There are three main types:

  1. Linear. With this structure, all actions are performed sequentially, in order, and only once. The diagram looks like a sequence of blocks arranged from top to bottom in the order in which they are executed. The primary and intermediate data obtained cannot influence the direction of the computational process.
  2. Branching. It has found wide application in practice for solving complex problems. If it is necessary to take initial conditions or intermediate results into account, the necessary calculations are performed in accordance with them, and the direction of the computational process can change depending on the result obtained.
  3. Cyclic. To make it easier to work with many tasks, it makes sense to repeat some sections of program code many times. So as not to write out the repeated commands again and again, a cyclic structure is used. It provides a sequence of commands that is repeated until a given condition is met. The use of loops greatly reduces the effort of writing a program.
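
All three structures can appear together in one short function. A hypothetical Python example:

```python
def count_positives(numbers):
    # Linear: these statements run once, top to bottom.
    positives = 0
    # Cyclic: the loop body repeats for each element of the input.
    for n in numbers:
        # Branching: which path is taken depends on a condition.
        if n > 0:
            positives += 1
    return positives
```
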

Programming

It matters which language the programs will be created in. It should be noted that many languages are “tailored” to specific operating conditions (for example, to running in a browser). In general, programming languages are divided into two groups:

  1. Functional.
  2. Operator, which in turn are divided into:

non-procedural;

procedural.

Can you guess which ones are used most often? Operator procedural languages - that is the answer. They can be machine-oriented or machine-independent. The former include assemblers, autocodes, and symbolic coding languages. Machine-independent languages are divided according to their orientation:

  • procedural;
  • problematic;
  • object.

Each of them has its own scope of application. But to write programs (useful applications or games), object-oriented languages are most often used. Of course, you can use others, but they are the most developed for creating final consumer products for the mass market. If you do not yet have a clear vision of where to start, I suggest paying attention to the basics of algorithmization and object-oriented programming: it is now a very popular direction with plenty of educational material. In general, the basics of algorithmization and programming languages are in demand now because there is a shortage of qualified developers, and their importance will only grow in the future.

Conclusion

When working with algorithms (and subsequently with programs), you should strive to think through every detail, down to the smallest. Each under-designed section of code discovered later only leads to additional work, increasing development costs and deadlines. Careful planning and working out all the nuances will save significant time, effort and money. Now, having read this article, you have an understanding of the basics of algorithmization and programming; all that remains is to apply this knowledge. If you want to study the topic in more detail, I can recommend the book “Fundamentals of Algorithmization and Programming” (Semakin, Shestakov), 2012.

Any process management requires certain rules and clear actions. A computer is a device designed to automate the creation, storage, processing and transmission of data, which means that clear instructions must be followed to perform a particular task.

To create programs designed to solve a problem on a computer, it is necessary to draw up an algorithm for solving it.

Algorithms include, for example, the rules of addition, multiplication, the solution of algebraic equations, matrix multiplication, and so on. The word “algorithm” comes from algoritmi, the Latin transliteration of the name of the 9th-century Khwarezmian mathematician al-Khwarizmi. Thanks to the Latin translation of al-Khwarizmi's treatise, Europeans became acquainted with the positional number system in the 12th century, and in medieval Europe “algorithm” referred to the decimal positional number system and the rules of calculation in it.

In other words, an algorithm is an exact instruction, and instructions are found in almost all areas of human activity. Algorithms for conducting a physical experiment, assembling a cabinet or TV, or processing a part are possible. However, not every instruction is an algorithm.

An instruction becomes an algorithm only when it satisfies certain requirements. One of these requirements is unambiguity: applied to the same data, the algorithm must always give the same result.

In relation to a computer, the algorithm allows you to formalize the computational process, which begins with processing a certain set of possible initial data and is aimed at obtaining results determined by these initial data. The term computing process also extends to the processing of other types of information, for example, symbolic, graphic or audio.

If the computational process ends with obtaining results, then the algorithm is said to be applicable to the considered set of initial data. Otherwise, they say that the algorithm is not applicable to the set of initial data. Any applicable algorithm has the following basic properties:

· discreteness;

· certainty;

· effectiveness;

· mass character.

Discreteness – sequential execution of simple or previously defined (subroutine) steps. The transformation of source data into results is carried out discretely in time.

Certainty consists in the coincidence of the results obtained, regardless of the user and the technical means used (unambiguous interpretation of instructions).

Efficiency means the possibility of obtaining a result after performing a finite number of operations.

Mass character lies in the possibility of applying the algorithm to a whole class of similar problems that differ in the specific values of the initial data (development in general form).

To specify an algorithm, it is necessary to describe its following elements:

· a set of objects that make up the totality of possible initial data, intermediate and final results;

· start rule;

· the rule of direct processing of information (description of the sequence of actions);

· ending rule;

· rule for extracting results.

An algorithm is always designed for a specific executor. In our case, the executor is a computer. To make implementation on a computer possible, the algorithm must be described in a language the computer understands, that is, in a programming language.

The concepts of algorithm and program are not sharply delineated. Typically, a program is the final version of an algorithm for solving a problem, oriented towards a specific user.

Thus, a computer program can be defined as an algorithm, written in a programming language, that is intended for execution on a computer.

The main ways to describe algorithms include the following:

· verbal-formulaic (in natural language);

· structural or block diagram;

· using special algorithmic languages;

· using graph diagrams (a graph is a collection of points and lines in which each line connects two points. The points are called vertices, the lines are called edges).

Before compiling programs, they most often create an algorithm for solving a given problem using one of the methods described above.

In the verbal-formulaic method, the algorithm is written as text with formulas, point by point, setting out the sequence of actions.

Suppose, for example, that you need to find the value of the following expression:

y = 4a – (x + 3).

In a verbal and formulaic way, the algorithm for solving this problem can be written in the following form:

1. Enter the values of a and x.

2. Add x and 3.

3. Multiply a by 4.

4. Subtract the sum (x+3) from 4a.

5. Output y as the result of evaluating the expression.
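
The same five steps translate directly into code. A sketch (the name compute_y is our own):

```python
def compute_y(a, x):
    s = x + 3      # step 2: add x and 3
    p = 4 * a      # step 3: multiply a by 4
    y = p - s      # step 4: subtract the sum (x + 3) from 4a
    return y       # step 5: output y as the result
```

For a = 2 and x = 1 this yields 4*2 - (1 + 3) = 4.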

In the block-diagram method, the algorithm is depicted by geometric figures (blocks) connected by control lines (flow directions) with arrows. A sequence of actions is recorded in the blocks.

This type of recording of the algorithm has the greatest advantages. It is the most visual: each operation of the computational process is depicted as a separate geometric figure. In addition, a graphical representation of the algorithm clearly shows the ramifications of ways to solve the problem depending on various conditions, the repetition of individual stages of the computational process and other details.

The design of programs must meet certain requirements (Fig. 2.). Currently, there is a unified system of program documentation (USPD), which establishes the rules for the development, execution of programs and program documentation. The ESPD also defines the rules for the design of flowcharts of algorithms (GOST 10.002-80 ESPD, GOST 10.003-80 ESPD).

One of the properties of an algorithm is discreteness, i.e. the division of the computational process into individual steps and the allocation of individual sections of the program to certain structures.

Any computing process can be represented as a combination of elementary algorithmic structures:

· Sequence. Commands execute one after another, from top to bottom. An algorithm consisting only of sequence structures is called linear.

· Branching. Execution proceeds along one of two (or more) branches; the choice of branch depends on a condition evaluated at the branch point and on the data available there.

· Loop. Certain actions may be repeated many times; the number of repetitions depends on the loop condition.

· Function (subroutine). Commands separated from the main program execute only when called from the main program (from any point in it). The same function can be called from the main program any number of times.
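All four elementary structures can be shown in a few lines of Python (a minimal sketch; the names `classify` and `total` are invented for illustration):

```python
def classify(n):        # function: separated from the main program, callable from anywhere
    if n % 2 == 0:      # branching: execution takes one of two paths
        return "even"
    return "odd"

total = 0
for i in range(1, 6):   # loop: the body repeats while i runs from 1 to 5
    total += i          # sequence: statements execute top to bottom

print(total, classify(total))  # 15 odd
```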

There are three main types of algorithms: linear, branching, and cyclic.

Modern programming systems usually provide users with powerful and convenient program development tools. These include:

· compiler or interpreter;

· integrated development environment;

· tools for creating and editing program texts;

· extensive libraries of standard programs and functions;

· debugging programs, i.e. programs that help find and fix errors in the program;

· user-friendly dialogue environment;

· multi-window operating mode;

· powerful graphics libraries;

· utilities for working with libraries;

· built-in assembler;

· built-in help system;

· other specific features.

The first step to understanding the importance of studying and knowing algorithms is to define precisely what is meant by an algorithm. According to the popular book Algorithms: Construction and Analysis (Cormen, Leiserson, Rivest, Stein), "an algorithm is any well-defined computational procedure whose input is a certain quantity or set of quantities, and whose result is an output value or set of values." In other words, algorithms are like road maps for achieving a clearly defined goal. A piece of code that calculates the terms of the Fibonacci sequence is an implementation of a specific algorithm. Even a simple function that adds two numbers is an algorithm, albeit a simple one.

Some algorithms, such as those for calculating the Fibonacci sequence, are intuitive and relate to innate logical thinking and problem-solving skills. However, most of us would do well to learn complex algorithms so that in the future we can use them as building blocks to solve logic problems more efficiently. In fact, you might be surprised how many complex algorithms people use when checking email or listening to music. This article introduces some basic ideas of algorithm analysis, with practical examples illustrating the importance of studying algorithms.

Algorithm Execution Time Analysis

One of the most important aspects of an algorithm is its speed. It is often easy to come up with an algorithm that solves a problem, but if the algorithm is too slow, it has to be reworked. Because the exact speed of an algorithm depends on where it runs, as well as on implementation details, computer scientists usually talk about execution time relative to the size of the input. For example, if the input consists of N integers, an algorithm may have a running time proportional to N², written as O(N²). This means that if you run an implementation of the algorithm on a computer with an input of size N, it will take C·N² seconds, where C is some constant that does not change as the input size changes.

However, the execution time of many complex algorithms depends not only on the size of the input but also on many other factors. For example, an algorithm that sorts a set of integers can run much faster if the set is already sorted. It is therefore customary to talk about worst-case and average-case execution time. The worst-case execution time is the maximum running time of the algorithm over the "worst" of all possible inputs. The average-case execution time is the average running time of the algorithm over all possible inputs. Of these two, the worst case is easiest to reason about and is therefore most often used as a benchmark for a given algorithm. Determining the worst-case and average-case execution time of an algorithm can be quite difficult, because it is usually not possible to run the algorithm on all possible inputs.

Sorting

Sorting is a good example of an algorithm that programmers use very often. The simplest way to sort a group of elements is to start by removing the smallest element from the group and putting it first. Then the next smallest element is removed and placed second, and so on. Unfortunately, the running time of this algorithm is O(N²), meaning it takes time proportional to the square of the number of elements. If we had to sort a billion elements, this algorithm would require about 10¹⁸ operations. If we assume that a typical desktop PC performs roughly 10⁹ operations per second, it would take years to finish sorting that billion elements.
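The "take the smallest remaining element" scheme described above is selection sort. A minimal Python sketch (the function name is illustrative):

```python
def selection_sort(items):
    items = list(items)  # work on a copy
    for i in range(len(items)):
        # find the index of the smallest element in the unsorted tail
        smallest = min(range(i, len(items)), key=items.__getitem__)
        # swap it into position i
        items[i], items[smallest] = items[smallest], items[i]
    return items

print(selection_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
```

The two nested scans (the outer loop and the inner `min`) are exactly what makes the running time O(N²).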

Fortunately, there are a number of more advanced algorithms, such as quicksort, heapsort, and mergesort. These algorithms have a running time of O(N·Log(N)). Thus the number of operations required to sort a billion elements drops to limits so reasonable that even the cheapest desktop PC can perform the sort. Instead of a billion squared operations (10¹⁸), these algorithms require only about 10 billion (10¹⁰), i.e., 100 million times fewer.
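One of these O(N·Log(N)) algorithms, merge sort, can be sketched briefly: split the list in half, sort each half recursively, then merge the two sorted halves (this sketch borrows Python's `heapq.merge` for the merging step):

```python
import heapq

def merge_sort(items):
    if len(items) <= 1:
        return list(items)            # a list of 0 or 1 elements is sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])    # sort each half recursively
    right = merge_sort(items[mid:])
    return list(heapq.merge(left, right))  # merge two sorted runs in linear time

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```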

Shortest Path

Algorithms for finding the shortest path from one point to another have been studied for many years. There are plenty of practical applications of these algorithms, but for simplicity we will stick to the following statement: we need to find the shortest path from point A to point B in a city with several streets and intersections. There are many different algorithms for solving this problem, and they all have their advantages and disadvantages. Before we dive into them, let's consider the running time of a naive brute-force algorithm. If an algorithm considers every possible path from A to B (that does not form a cycle), it is unlikely to finish in our lifetime, even if A and B are in a small town. The running time of this algorithm is exponential, denoted O(C^N) for some C. Even for small values of C, C^N becomes an astronomical number when N grows even moderately large.

One of the fastest algorithms for solving this problem, Dijkstra's algorithm, has a running time of O((E + V)·Log(V)), where E is the number of road segments and V is the number of intersections. It takes about 2 seconds to find the shortest path in a city of 10,000 intersections and 20,000 road segments (there are usually about 2 road segments per intersection). The algorithm is fairly complex and requires a priority queue data structure. In some cases, however, even this is too slow (consider finding the shortest path from New York to San Francisco: there are millions of intersections in the USA), and programmers try to improve the running time using so-called heuristics. A heuristic is an approximation of some quantity relevant to the problem. In the shortest path problem, for example, it is useful to know roughly how far a point is from the destination. Knowing this, you can develop a faster algorithm (the A* search algorithm, for example, often runs much faster than Dijkstra's). This approach does not always improve the worst-case running time, but in most real-world applications the algorithm runs faster.
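Dijkstra's algorithm with the priority queue mentioned above can be sketched compactly in Python (the road network and edge lengths below are invented for illustration):

```python
import heapq

def dijkstra(graph, start):
    # graph: {vertex: [(neighbor, edge_length), ...]}
    dist = {start: 0}
    queue = [(0, start)]                      # priority queue of (distance, vertex)
    while queue:
        d, v = heapq.heappop(queue)
        if d > dist.get(v, float("inf")):
            continue                          # stale entry, already improved
        for nbr, length in graph.get(v, []):
            nd = d + length
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd                # found a shorter path to nbr
                heapq.heappush(queue, (nd, nbr))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1}
```

Note how the heap always hands back the closest unfinished vertex, which is exactly what makes the algorithm correct for non-negative edge lengths.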

Approximate Algorithms

Sometimes even the most advanced algorithm with the best heuristics is too slow on the fastest computer. In such cases you have to reduce the accuracy of the final result. Instead of trying to obtain the shortest path, you can settle for a path that is, say, at most 10% longer than the shortest.

In fact, there are many important problems for which the currently known algorithms produce the optimal result too slowly. The most famous group of such problems is called NP (nondeterministic polynomial). When a problem is called NP-complete or NP-hard, it means that no one knows a way to obtain the optimal solution efficiently. Moreover, if someone were to develop an efficient algorithm for one NP-complete problem, it could be adapted to solve every problem in NP.

A good example of an NP-hard problem is the traveling salesman problem. A salesman wants to visit N cities, and he knows how long it takes to travel from one city to another. The question is: in what order should he visit all the cities so that the total travel time is minimal? The fastest known algorithm for solving this problem exactly is too slow — and many believe it always will be — so programmers look for algorithms that are fast enough to give a good, though often not optimal, solution.
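One such "fast but not optimal" approach is the nearest-neighbor heuristic: always travel to the closest unvisited city. A minimal sketch (the distance matrix below is invented for illustration):

```python
def nearest_neighbor_tour(dist, start=0):
    # dist[i][j] is the travel time from city i to city j
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        # greedily pick the closest city not yet visited
        nxt = min(unvisited, key=lambda city: dist[here][city])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 8],
     [10, 4, 8, 0]]
print(nearest_neighbor_tour(d))  # [0, 1, 3, 2]
```

The heuristic runs in O(N²) time but can produce a tour noticeably longer than the optimum.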

Random Algorithms

Another approach used to solve some problems is to make the algorithm random. Randomization does not improve the algorithm's worst-case time, but it quite often works well in the average case. Quicksort is a good example of the use of randomization. In the worst case, quicksort sorts a group of elements in O(N²), where N is the number of elements. If randomization is used, the chances of hitting the worst case become negligibly small, and on average quicksort runs in O(N·Log(N)) time. Other algorithms guarantee O(N·Log(N)) running time even in the worst case, but they are slower in the average case. Although both have running time proportional to N·Log(N), quicksort has a smaller constant factor: it requires C·N·Log(N) operations, while the others require more than 2·C·N·Log(N).
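Randomized quicksort can be sketched very briefly: pick a random pivot, partition around it, and recurse (a minimal illustration, not an in-place implementation):

```python
import random

def quicksort(items):
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)                  # random pivot defeats adversarial inputs
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

Because the pivot is random, no fixed input can reliably trigger the O(N²) worst case.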

Another algorithm that uses random numbers finds the median of a group of numbers with an average running time of O(N). This is much faster than sorting the numbers and taking the middle one, which takes O(N·Log(N)). There are deterministic (non-random) algorithms that find the median in O(N) time, but the randomized algorithm is easier to understand and often faster in practice.

The main idea of the median-finding algorithm is to pick a random number from the group and count how many numbers in the group are smaller than it. Say there are N numbers and K of them are less than or equal to the chosen number. If K is less than half of N, then the median is the (N/2 − K)-th smallest among the numbers greater than the chosen one, so we discard the K numbers less than or equal to it and look for the (N/2 − K)-th smallest number instead of the median. The algorithm is the same: we again pick a number at random and repeat the steps described.
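This scheme is commonly called quickselect. A minimal recursive sketch (1-based k and the helper name `select_kth` are choices of this sketch, not part of the original text):

```python
import random

def select_kth(items, k):
    # returns the k-th smallest element, k = 1 being the minimum
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    if k <= len(less):
        return select_kth(less, k)            # answer lies among the smaller numbers
    if k <= len(less) + len(equal):
        return pivot                          # the pivot itself is the answer
    # discard the smaller numbers and adjust k, exactly as described above
    return select_kth(greater, k - len(less) - len(equal))

data = [7, 1, 5, 3, 9]
print(select_kth(data, (len(data) + 1) // 2))  # median -> 5
```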

Compression

Another class of algorithms is designed for data compression. Unlike, say, a sorting algorithm, a compression algorithm has no single expected output; instead, it optimizes against some criterion. In the case of data compression, an algorithm (LZW, for example) tries to make the data occupy as few bytes as possible while still allowing it to be decompressed to its original form. In some cases this type of algorithm uses the same methods as other algorithms, producing a good but not optimal result. JPG and MP3, for example, compress data in a way that makes the final result lower quality than the original, but also smaller in size. MP3 compression does not preserve every feature of the original audio file; it tries to keep enough detail to provide acceptable quality while significantly reducing file size. The JPG format follows the same principle, but the details differ significantly because the goal is to compress an image rather than audio.
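The dictionary-building idea behind LZW can be sketched in a few lines (compression side only; the code-table layout is simplified for illustration, and the decoder is omitted):

```python
def lzw_compress(text):
    codes = {chr(i): i for i in range(256)}      # start with all single-byte strings
    current, output = "", []
    for ch in text:
        if current + ch in codes:
            current += ch                        # keep extending the known phrase
        else:
            output.append(codes[current])        # emit the code of the longest match
            codes[current + ch] = len(codes)     # learn the new, longer phrase
            current = ch
    if current:
        output.append(codes[current])
    return output

print(lzw_compress("ABABABA"))  # [65, 66, 256, 258]
```

Repeated substrings get replaced by ever-longer dictionary entries, which is where the compression comes from; a decoder can rebuild the same dictionary on the fly without it being transmitted.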

Why is it so important to know algorithms?

To use algorithms properly, it is important to know all the types of algorithms mentioned. If you have to develop an important piece of software, you should be able to estimate how fast your algorithm runs, and the accuracy of that estimate depends on how practiced you are at analyzing execution time. In addition, you need to know the details of the algorithms involved, so that you can anticipate special cases in which the program will run slowly or produce unacceptable results.

Of course, there will be times when you stumble upon previously unexplored problems. In such cases, you need to come up with a new algorithm, or apply the old algorithm in a new way. The more you know about algorithms, the more likely you are to find a good solution to a problem. In many cases, a new problem can easily be reduced to an old one, but to do this you need to have a fundamental understanding of the old problems.

As an example, consider how network switches work. A switch has N cables connected to it and receives data packets arriving over these cables. The switch must first parse the packets and then forward them over the correct cables. Like a computer, a switch operates in discrete time: packets are forwarded at discrete intervals, not continuously. A fast switch tries to send as many packets as possible during each interval, otherwise they accumulate and the switch fails. The goal of the algorithm is to send the maximum number of packets during each interval while ensuring that packets that arrived earlier are also sent earlier. It turns out that an algorithm known as "stable matching" is suitable for this problem, although at first glance that may not be obvious. Such connections between problem and solution can only be discovered with existing algorithmic knowledge.
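Stable matching is classically solved by the Gale–Shapley procedure: proposers try acceptors in preference order, and an acceptor provisionally keeps the best proposer seen so far. A minimal sketch (the preference lists below are invented; indices could stand for input and output ports):

```python
def stable_match(proposer_prefs, acceptor_prefs):
    n = len(proposer_prefs)
    # rank[a][p] = position of proposer p in acceptor a's preference list
    rank = [{p: r for r, p in enumerate(prefs)} for prefs in acceptor_prefs]
    next_choice = [0] * n          # next acceptor each proposer will try
    engaged_to = [None] * n        # acceptor -> current proposer
    free = list(range(n))
    while free:
        p = free.pop()
        a = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if engaged_to[a] is None:
            engaged_to[a] = p                  # acceptor was free
        elif rank[a][p] < rank[a][engaged_to[a]]:
            free.append(engaged_to[a])         # acceptor trades up, old proposer freed
            engaged_to[a] = p
        else:
            free.append(p)                     # rejected; will try the next choice
    return engaged_to

print(stable_match([[0, 1], [0, 1]], [[1, 0], [0, 1]]))  # [1, 0]
```

The resulting matching is stable: no proposer and acceptor would both prefer each other over their assigned partners.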

Real examples

There are plenty of examples of solutions to real problems that require the latest algorithms. Almost everything you do on a computer depends on algorithms that someone spent a long time developing. Even the simplest programs would not exist without the algorithms that work behind the scenes to manage memory and load data from the hard drive.

There are dozens of examples of complex algorithms in use, but we will discuss two problems whose solution requires the same skills as some TopCoder problems. The first is known as the maximum flow problem; the second involves dynamic programming, a technique that often solves problems at what seems like impossible speed.

2.4.1. The concept of basic algorithms

2.4.2. Linear structure algorithms

2.4.3. Basic algorithms for branching structures and examples of their programming

2.4.4. Basic algorithms for regular cyclic structures and examples of their programming

2.4.5. Basic algorithms for iterative cyclic structures and examples of their programming

2.4.6. Basic algorithms for processing one-dimensional arrays

2.4.7. Basic algorithms for processing two-dimensional arrays

2.4.8. Test questions on the topic “Basic algorithms and examples of their implementation”

2.4.9. Test tasks on the topic “Basic algorithms and examples of their implementation”

2.4.1. The concept of basic algorithms

Basic data-processing algorithms are the result of decades of research and development, and they continue to play an important role in the expanding use of computing.

The basic imperative programming algorithms include:

    The simplest algorithms, implementing the basic algorithmic structures.

    Algorithms that work with data structures. These define the basic principles and methodology used to implement, analyze, and compare algorithms, and provide insight into methods of data representation. Such structures include linked lists and strings, trees, and abstract data types such as stacks and queues.

    Sorting algorithms, designed to order arrays and files, are of particular importance. Related to sorting are priority queues, selection problems, and merging problems.

    Search algorithms, designed to find specific elements in large collections. These include basic and advanced search methods using trees and digital key transformations, including digital search trees, balanced trees, hashing, and methods suitable for very large files.

    Graph algorithms, useful for a number of complex and important problems. A general graph-search strategy is developed and applied to fundamental connectivity problems, including shortest paths, minimum spanning trees, network flows, and matching. A unified approach shows that these algorithms are based on the same procedure, which in turn rests on the abstract priority queue data type.

    String-processing algorithms, which include a number of methods for handling (long) character sequences. Searching in a string leads to pattern matching, which in turn leads to parsing. File compression can also be assigned to this class of problems.

2.4.2. Linear structure algorithms

Example 2.4.2-1.

where x = -1.4; y = 0.8; the variables k and m are of integer type, the remaining variables are of real type; [n] is the integer part of the number n.

The diagram of the algorithm and programs in the languages ​​QBasic, Pascal, C++ are presented in Fig. 2.4.2-1.

Note that the integer variable k received the rounded value of n, while the integer variable m received the value of n truncated to its integer part by the FIX() function.

Example 2.4.2-2. Calculate and display the following values:

where x = 2.9512; y = 0.098633; variables i and j are of integer type; the remaining variables are of real type.

The algorithm diagram and program codes are presented in Fig. 2.4.2-2.

Fig. 2.4.2-2.

The results of executing the program with the above values ​​of the initial data have the following form:

Example 2.4.2-3. Calculate and display the value of the first escape velocity.

Let's formalize it. The minimum speed at which a spacecraft in the Earth's gravitational field becomes an artificial satellite is

v₁ = √(G·M / R),

where G is the gravitational constant; M is the mass of the Earth; R is the distance from the center of the Earth to the spacecraft.

The algorithm diagram and program codes are presented in Fig. 2.4.2-3.

Fig. 2.4.2-3.

The results of executing the program with the above values ​​of the initial data have the following form.
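Example 2.4.2-3 can be sketched in Python as follows (the constants are standard reference values, assumed here since the original initial data are not reproduced; R is taken at the Earth's surface):

```python
import math

G = 6.674e-11   # gravitational constant, m^3/(kg*s^2)
M = 5.972e24    # mass of the Earth, kg
R = 6.371e6     # distance from the Earth's center to the spacecraft, m

v = math.sqrt(G * M / R)   # v1 = sqrt(G*M/R)
print(f"First escape velocity: {v:.0f} m/s")
```

With these values the result comes out close to the familiar ~7.9 km/s.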



 

