Министерство образования Республики Беларусь БЕЛОРУССКИЙ НАЦИОНАЛЬНЫЙ ТЕХНИЧЕСКИЙ УНИВЕРСИТЕТ ФАКУЛЬТЕТ ГОРНОГО ДЕЛА И ИНЖЕНЕРНОЙ ЭКОЛОГИИ СБОРНИК МАТЕРИАЛОВ 74 – й студенческой научно-технической конференции 12 апреля 2018 г. Электронное издание Минск 2018 2 УДК 811.111 ББК 81.2Англ. С23 Сборник материалов 74-й студенческой научно- технической конференции / под общ. ред. Хоменко С.А., Личевской С.П.// БНТУ, Минск, 2018. – 234 с. ISBN 978-5-7679-2355-7 Р е ц е н з е н т Заведующий кафедрой английского языка естественных факультетов БГУ кандидат филологических наук, доцент А.Э. Черенда В сборник включены материалы докладов 74-й студенческой научно-технической конференции по секции «Английский язык». Белорусский национальный технический университет. Факультет горного дела и инженерной экологии. Пр-т Независимости, 65, уч. корп. 9, г. Минск, Республика Беларусь. Тел.: (017) 331-40-52 E-mail: eng1@tut.by http://www.bntu.by/fgde.html Регистрационный № БНТУ/ФГДЭ ©Хоменко С.А., компьютерный дизайн, 2018 © БНТУ, 2018 3 Оглавление Sivkova K., Akylich T. Types of Computer Graphics ................ 7 Adaskevich V., Akulich T. The Foundation of Silicon Valley .......................................................................... 11 Kevra E., Yazdani Cherati A., Bankovskaya I. The Greatest Inventions of Elon Musk ............................................................ 15 Kozlova L., Nekrashevich D., Bankovskaya I. Pentium II Xeon Processor ..................................................................................... 19 Kubarskiy M., Borodin A., Bankovskaya I. Importance of Information Security in Organizations ..................................... 23 Shpakovsky E., Tretyakevich M., Bazyleva I. Teleportation as One of the Mysteries of Our Time ............................................ 27 Gutyra A., Vychik F., Bazyleva I. Virtual and Augmented Reality .......................................................................................... 30 Bobnis U., Kovalikhin A., Bazyleva I. IT Industry of the Republic of Belarus .................................................................... 34 Guevich M., Beznis Y. Production and Recycling of Aluminium .............................................................................. 38 Podgorny A., Rachko E., Beznis Y. Welding Manipulators in Shipbuilding ................................................................................ 42 Achinovich V., Barankevich N., Beznis Y. Coke Production for Blast Furnace Ironmaking ......................................................... 46 Borodach V., Vasilenya M., Beznis Y. Software. Notion and Development ................................................................................ 50 Silich V., Boyarskaya A. Three-dimensional Machine-vision Measurement System ................................................................. 54 Pavlov V., Lameko P., Boyarskaya A. World-wide Application of the TIR System ....................................................................... 57 Nikitina M., Yurko E., Boyarskaya A. Green Transportation ............................................................................ 60 4 Nemchenko A., Boyarskaya A. Current Trends in Container Shipping Industry ....................................................................... 64 Sidorova D., Bozhko Y., Vanik I. The Prospects of Smart Grid in Belarus ..................................................................................... 68 Oshukovskaya O., Vanik I. 
Gun Control Should Be Stricter.................................................................................... 72 Dovzhenko P., Vasilieva T. Autonomous Cars: Future or Reality? ........................................................................................ 75 Butakova A., Ladutska N. How Information Technologies Impact Transportation ............................................................... 79 Panova T., Lulenko K., Ladutska N. Intermodal Transport as a Way to Reduce Costs .................................................................. 83 Khadasevich U., Ladutska N. How to Ship a Car Easily and Affordably ................................................................................... 87 Ganushchenko A., Lichevskaya S. Upcoming Technology ........ 91 Kirilyuk A., Mandik N., Lichevskaya S. 5 Ideas of Elon Musk ................................................................................... 95 Kukshinov A., Lichevskaya S. Body Language .......................... 99 Koval D., Lapko O. Development of Technological Documentation for Maintenance and Repair Using a Modular Approach ................................................................................... 102 Vasilieva N., Podgurskaya V., Lapko O. Bull Position vs Bear Position ...................................................................................... 106 Krapivin S., Makarevich V., Matusevich O. Robots versus Artificial Intelligence ................................................................ 110 Ostreyko A., Matusevich O. Nuclear Power Stations .............. 114 Panteley D., Matusevich O. Sahara Forest Project .................. 118 Kovtun G., Soloviov S., Matusevich O. Energy Production from Waste ......................................................................................... 122 Monich K., Nikitin Y., Matusevich O. Piezoelectricity ............ 125 5 Yaroshevich E. Mileiko A. Chemical Elements Used in Engineering ............................................................................... 129 Kachina V. Mileiko A. Grey Cast Iron and White Cast Iron .................................................................................... 132 Nazarov D., Mileiko A. Gas Tungsten Arc Welding ............... 135 Aristova D., Molchan O. Facial Recognition Using Convolutional Neural Networks .............................................. 140 Golubev A., Molchan O. Machine Learning and Genetic Algorithms ................................................................................. 142 Korotkevich V., Molchan O. The Internet of Things............... 145 Kosyakova D., Molchan O. What Is HTTPS and What Does It Do? ......................................................................................... 148 Poleshchuk E., Molchan O. Computer Graphics ..................... 151 Rosetskaya A., Murauyeva A. Artificial Intelligence Technology ................................................................................ 156 Stanilko M., Linkevich M., Murauyeva A. Interactive Mirrors ...................................................................................... 160 Bulatovsky V., Pedko L. Four-Wheel Steering System ........... 163 Savenkov A., Pedko L. The Procedure of Vehicle Certification in Belarus ................................................................................... 166 Savenkov A., Pedko L. Eternal Roads of the Future. Plastic Roads ............................................................................. 169 Nemchenko A., Pedko L. 
Container Lift System ..................... 173 Svirski R., Piskun O. Electric Car ............................................ 176 Shulga D., Yakubovich A., Piskun O. Tulip Mania: When Tulips Cost as Much as Houses ........................................................... 179 Motorin R., Pigulsky M., Piskun O. Russian Soldier of the Future ........................................................................................ 183 Cherkashin N., Nesterovich R., Piskun O. Humanitarian Demining ................................................................................... 187 6 Shevcov N., Buk I., Piskun O. The Development of Military Engineering ............................................................................... 190 Baskleev Y., Dudchenko G., Piskun O. The Biggest Scam in the History ....................................................................................... 194 Stoiko Y., Rybaltovskaya E. Industry 4.0 ................................. 196 Andreev D., Akulov S., Slesarenok E. Web Development ....... 200 Savchits D., Slesarenok E. Industrial Design ........................... 203 Laptsionak U., Slesarenok E. “Minsk” Family of Computers ............................................................................. 206 Shimanovitch M., Slesarenok E. Stadium Construction .......... 210 Goncharevich V., Slesarenok E. Tires ....................................... 213 Kabushkin Ph., Slesarenok E. Inside a CPU ................................ 217 Kapustsinski A., Khomenko S. Data Сenters’ Electric Power Supply ........................................................................................ 219 Papkova N., Khomenko S. Alternative Energy Potential of the Republic of Belarus .................................................................. 224 Savenkov A., Khomenko S. A New Way of Transporting Cars by Rail ........................................................................................ 228 Tsybulkin P., Yalovik E. Electric Drive at the Basis of a Permanent Magnet Motors and Methods of Controlling Them ..................................................................... 230 Herasimionak A., Yalovik E. Importance of Implementing a Measurement Management System in Companies of the Republic of Belarus .................................................................. 233 7 УДК 811.111:004.92 Sivkova K., Akylich T. Types of Computer Graphics Belarusian National Technical University Minsk, Belarus Computer graphics are pictures and films created using computers. It is a vast and recent area in computer science. The phrase was coined in 1960, by computer graphics researchers Verne Hudson and William Fetter of Boeing. It is often abbreviated as CG [1]. Some topics in computer graphics include user interface design, sprite graphics, vector graphics, 3D modeling, shaders, GPU design, implicit surface visualization with ray tracing, and computer vision, among others. The overall methodology depends heavily on the underlying sciences of geometry, optics, and physics. Computer graphics is responsible for displaying art and image data effectively and meaningfully to the user. It is also used for processing image data received from the physical world [2]. Two-dimensional computer graphics are the computer-based generation of digital images. 2D computer graphics are mainly used in applications that were originally developed upon traditional printing and drawing technologies such as typography. 
In those applications, the two-dimensional image is an independent artifact with added semantic value; two-dimensional models are therefore preferred, because they give more direct control of the image than 3D computer graphics, whose approach is more akin to photography than to typography [3].

Fractal art is a form of algorithmic art created by calculating fractal objects and representing the calculation results as still images, animations and other media. Fractals differ from other geometric figures in the way they scale. Fractal art developed from the mid-1980s onwards. The mathematical beauty of fractals lies at the intersection of generative art and computer art. Fractal art is usually created indirectly with the assistance of fractal-generating software, iterating through three phases: setting the parameters of the fractal software, executing the possibly lengthy calculation, and evaluating the product [1].

A bitmap, a single-bit raster, corresponds bit-for-bit with an image displayed on a screen, generally in the same format used for storage in the display's video memory. A raster is technically characterized by the width and height of the image in pixels and by the number of bits per pixel. Raster graphics are best used for non-line-art images, specifically digitized photographs, scanned artwork or detailed graphics [4].

Vector graphics represent an image as a set of geometric primitives. Usually these are points, straight lines, circles and rectangles, as well as, in the general case, splines of some order. Attributes such as line thickness and fill colour are assigned to the objects. A drawing is stored as a set of coordinates, vectors and other numbers that characterize the set of primitives; when overlapping objects are rendered, their drawing order is specified. An image in vector format leaves plenty of room for editing: it can be scaled, rotated and deformed without loss, and imitating three-dimensionality is easier in vector graphics than in a raster. The mathematical description of the vector picture stays the same; only the values of some variables, such as coefficients, change. When a bitmap is transformed, the source data are merely a description of a set of pixels, so the problem arises of replacing a smaller number of pixels with a larger one (when enlarging) or a larger number with a smaller one (when reducing) [1].

The choice of a bitmap or vector format depends on the goals and tasks of working with the image. If photographic accuracy is needed, a raster format is preferable. Logos, diagrams and design elements are more conveniently represented in vector format. It is clear that in both the raster and the vector representation graphics (as well as text) are displayed on the monitor screen or the printing device as a set of dots. On the Internet graphics are represented in one of the raster formats understood by browsers without installing additional modules: GIF, JPG and PNG [5].

However, there is a tendency towards convergence. Most modern vector editors can use bitmap images as a background, or even translate parts of an image into vector format using built-in tools (tracing), and they usually offer tools for editing the loaded background image, at least at the level of various built-in or installed filters. Version 8 of Illustrator, for example, can load Photoshop .psd files and use each of the resulting layers. In addition, to apply the same filters, the generated vector image can be converted directly into raster format and then used as a non-editable raster element. All of this comes on top of the usual converters from vector to raster formats that produce a file of the required type [2].
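To make the scaling contrast described above concrete, here is a minimal illustrative sketch; the primitive layout and function names are assumptions for illustration and are not taken from the cited sources. A vector circle is rescaled by changing only its stored coefficients, whereas enlarging a raster has to invent new pixels, here by simple replication.

    # Illustrative sketch (not from the article): lossless vector scaling vs. raster resampling.

    def scale_vector_circle(circle, factor):
        # A vector primitive is just numbers; scaling only changes coefficient values.
        return {"center": (circle["center"][0] * factor, circle["center"][1] * factor),
                "radius": circle["radius"] * factor,
                "line_width": circle["line_width"]}

    def scale_raster(pixels, factor):
        # A raster is a fixed grid of pixels; enlarging it must invent new pixels
        # (here by nearest-neighbour replication), so quality is lost.
        return [[row[x // factor] for x in range(len(row) * factor)]
                for row in pixels for _ in range(factor)]

    circle = {"center": (10, 10), "radius": 5, "line_width": 2}
    print(scale_vector_circle(circle, 4))        # exact description at any size
    print(scale_raster([[0, 1], [1, 0]], 2))     # 2x2 image blown up into 4x4 blocks

Because the vector version remains an exact mathematical description, it can be rendered sharply at any size, while the enlarged raster can only repeat the pixels it already has.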
Three-dimensional graphics operate on objects in three-dimensional space; the result is usually a flat picture, a projection. Three-dimensional computer graphics are widely used in movies and computer games [1]. In three-dimensional computer graphics, all objects are usually represented as a set of surfaces or particles. The minimal surface element is called a polygon, and triangles are usually chosen as the basic polygon [4]. Three types of matrices are used in computer graphics: the rotation matrix, the translation matrix and the scaling matrix.

The study of computer graphics is a sub-field of computer science which studies methods for digitally synthesizing and manipulating visual content. Although the term often refers to three-dimensional computer graphics, it also encompasses two-dimensional graphics and image processing [2]. As an academic discipline, computer graphics studies the manipulation of visual and geometric information using computational techniques. It focuses on the mathematical and computational foundations of image generation and processing rather than on purely aesthetic issues. Computer graphics is often differentiated from the field of visualization, although the two fields have many similarities [5].

Any image on a monitor is, because of the flat screen, ultimately a raster: the monitor is a matrix consisting of columns and rows of pixels. Three-dimensional graphics exist only in our imagination, since what we see on the monitor is a projection of a three-dimensional figure, and the space itself is created by us. Thus graphics are specified in either raster or vector form, but the method of visualization is always a raster, and the number of pixels depends on the way the image is specified [3].

References: 1. Computer Graphics [Electronic resource]. – Mode of access: http://wikimedia.ru. – Date of access: 10.03.2018. 2. Types of Computer Graphics [Electronic resource]. – Mode of access: http://informatikaiikt.narod.ru. – Date of access: 10.03.2018. 3. Types of Computer Graphics [Electronic resource]. – Mode of access: http://imped.vgts.ru. – Date of access: 10.03.2018. 4. Types of Computer Graphics [Electronic resource]. – Mode of access: http://project68.narod.ru. – Date of access: 11.03.2018. 5. Types of Computer Graphics [Electronic resource]. – Mode of access: http://flashmaker.8m.ru. – Date of access: 10.03.2018.

УДК 004.3’1:811.111 Adaskevich V., Akulich T. The Foundation of Silicon Valley. Belarusian National Technical University, Minsk, Belarus

Silicon Valley – what is it? Silicon Valley is the heartland of the microelectronics industry based on semiconductors. Geographically, it is the northern part of Santa Clara County, an area stretching from the south end of the San Francisco Bay Area to San Jose, bounded by the Santa Cruz Mountains in the west and the northern part of the Diablo Range in the east. The name Silicon Valley was coined in 1971 by Don C. Hoefler. Silicon was chosen because it is the material from which semiconductor chips are made, which is "the fundamental product of the local high-technology industries". Silicon Valley saw the "development of the integrated circuit, the microprocessor, the personal computer and the video game" and has spawned a lot of high-tech products such as pocket calculators, cordless telephones, lasers and digital watches.
The image of Silicon Valley is the nucleus of modern computing, presenting the most important events, which comprise the developments of the three major companies Hewlett-Packard, Intel and Apple [1]. The story of the Silicon Valley starts with Stanford University in Palo Alto. In 1887, Leland Stanford, a wealthy railroad magnate who owned a large part of the Pacific Railroad, decided to dedicate a university to his son’s memory who had died due to a severe disease shortly before he intended to go to a university. 12 Frederick Terman, who was the progenitor of the initial Silicon Valley boom, today is also known as the «godfather of Silicon Valley». Terman became head of the department of engineering by 1937 and established a stronger cooperation between Stanford and the surrounding electronics industry to stop the brain drain caused by many students who went to the East after graduation, as they did not find a job in California then. HP Company. Hewlett-Packard was one of the first companies to be founded in the Silicon Valley and has today become the largest one to be seated there. Its story is typical for this Valley and has had a great impact on many firms founded later on. Bill Hewlett and David Packard met at Stanford University in 1934. Bill Hewlett was the «son of the dean of the Stanford Medical School, while Dave Packard had come to Stanford from Pueblo, Colorado», and was an enthusiastic radio ham. The new firm Hewlett-Packard (HP) was founded in 1939 and its first big sale were eight audio oscillators to Walt Disney Studios, which used them for the soundtrack of Fantasia. Bill Hewlett and Dave Packard have spent millions of their profits for social welfare and have established the Hewlett-Foundation. Hewlett and Packard have set a pattern of an outstanding company against which every new high- technology firm «must be measured». Intel Corporation. After the transistor and the integrated circuit, the invention of the microprocessor in the early 1970s represents the next step towards the modern way of computing, providing the basis for the subsequent personal computer revolution. It was at Intel where the first microprocessor was designed – representing the key to modern personal computers. With its logic and memory chips, the company provides the basic components for microcomputers. Intel is regarded as 13 Silicon Valley’s flagship and its most successful semiconductor company. Intel was founded in Mountain View, California in 1968 by Gordon E. Moore (of Moore's law fame), a chemist, and Robert Noyce, a physicist and co-inventor of the integrated circuit. Intel’s third employee was Andy Grove, a chemical engineer, who later ran the company through much of the 1980s and the high-growth 1990s. Intel (short for Integrated Electronics), a typical Fairchild spin-off, was financially backed by venture capital from Arthur Rock, who had been in contact with Noyce since 1957. Intel’s first really successful product was the 1103 dynamic random access memory (DRAM). 1971 was a crucial year at Intel. The company’s revenues surpassed operating expenses for the first time, and the company went public, raising $6.8 million. The invention of the microprocessor marked a turning point in Intel’s history. This development «changed not only the future of the company, but much of the industrial world» [2]. Apple Computer. 
Apple’s history starts with the story of two young and exceptional people who began building a computer in their garage and launched the microcomputer revolution, changing our daily life in many respects. The Apple story is the story of the two Steves. Stephen G. Wozniak was a typical Silicon Valley child. Born in 1950, he had grown up with the electronics industry in Silicon Valley, and had been intrigued by electronics from the start, since his father was an electronics engineer. In 1971, Wozniak built his first computer with his high- school friend Bill Fernande (they called it Cream Soda Computer). Bill introduced Woz to a friend of his named Steven P. Jobs. Jobs was born in 1955, and his foster parents were – unlike most other people in Silicon Valley – blue-collar workers. However, growing up in an environment full of 14 electronics, Steve came in con tact with this fascinating technology and was caught by it. Apple was founded by Steve Jobs, Steve Wozniak, and Ronald Wayne in April 1976 to develop and sell Wozniak’s Apple I personal computer. It was incorporated as Apple Computer, Inc. in January 1977, and sales of its computers, including the Apple II, saw significant momentum and revenue growth for the company. Jobs himself was the driving force that brought the key components together to build up a successful company. Silicon Valley is full of amazing and exciting stories. The Valley is a place of active entrepreneurism, and is home to thousands of smaller companies manufacturing a variety of electronics products. Whatever innovated developments the future may bring along, the nucleus for modern computing remains that «kingdom built on sand, the main material of which is silicon – primarily found in sand» [3]. References: 1. Silicon Valley as the nucleus of the modern way of computing [Electronic resource]. – Mode of access: www.silicon-valley-story.de. – Date of access: 10.03.2018. 2. From the Gold Mines of El Dorado to the “Golden” Startups of Silicon Valley [Electronic resource]. – Mode of access: www.silicon-valley-history.com. – Date of access: 13.03.2018. 3. A New Home for the Mind? [Electronic resource]. – Mode of access: www.netvalley.com. – Date of access: 15.03.2018. 15 УДК 62(092):811.111 Kevra E., Yazdani Cherati A., Bankovskaya I. The Greatest Inventions of Elon Musk Belarusian National Technical University Minsk, Belarus 1. Tesla Motors Tesla, Inc. (formerly Tesla Motors) is an American company that specializes in electric vehicles, energy storage and solar panel manufacturing based in Palo Alto, California. Founded in 2003, the company specializes in electric cars, lithium-ion battery energy storage, and residential photovoltaic panels. The additional products Tesla sells include the Tesla Powerwall and Powerpack batteries, solar panels and solar roof tiles. CEO Elon Musk said that he envisions Tesla as a technology company and independent automaker, aimed at eventually offering electric cars at prices affordable to the average consumer. The company was named after the electrical engineer and physicist Nikola Tesla by company co- founders Martin Eberhard and Marc Tarpenning. The company's Model S was the world's best- selling plug-in electric car in 2015 and 2016. Global sales of the Model S reached the 200,000 unit milestone during the fourth quarter of 2017. In September 2015, the company released its Model X, a crossover SUV. The Model 3 was released in July 2017. Tesla production passed 300,000 vehicles in February 2018. 
Tesla operates multiple production and assembly plants, notably Gigafactory 1 near Reno, Nevada and its main vehicle manufacturing facility at Tesla Factory in Fremont, California. 16 The Gigafactory primarily produces batteries and battery packs for Tesla vehicles and energy storage products [1]. 2. SpaceX Space Exploration Technologies Corp., doing business as SpaceX, is a private American aerospace manufacturer and space transport services company headquartered in Hawthorne, California. It was founded in 2002 by entrepreneur Elon Musk with the goal of reducing space transportation costs and enabling the colonization of Mars. SpaceX has since developed the Falcon launch vehicle family and the Dragon spacecraft family, which both currently deliver payloads into Earth orbit. SpaceX's achievements include the first privately funded liquid-propellant rocket to reach orbit (Falcon 1 in 2008), the first privately funded company to successfully launch, orbit, and recover a spacecraft (Dragon in 2010), the first private company to send a spacecraft to the International Space Station (Dragon in 2012), the first propulsive landing for an orbital rocket (Falcon 9 in 2015), the first reuse of an orbital rocket (Falcon 9 in 2017), and the first privately funded space agency to launch an object into solar orbit (Falcon Heavy's payload of a Tesla Roadster in 2018). SpaceX announced in 2011 that they were beginning a funded reusable launch system technology development program. In December 2015, a first stage was flown back to a landing pad near the launch site, where it successfully accomplished a propulsive vertical landing. This was the first such achievement by a rocket for orbital spaceflight [2]. In April 2016, with the launch of CRS-8, SpaceX successfully vertically landed a first stage on an ocean drone ship landing platform. In September 2016, CEO Elon Musk unveiled the mission architecture of the Interplanetary Transport System program, an ambitious privately funded initiative to 17 develop spaceflight technology for use in manned interplanetary spaceflight. If demand emerges, this transportation architecture could lead to sustainable human settlements on Mars over the long term. 3. PayPal PayPal Holdings, Inc. is an American company operating a worldwide online payments system that supports online money transfers and serves as an electronic alternative to traditional paper methods like checks and money orders. The company operates as a payment processor for online vendors, auction sites, and other commercial users, for which it charges a small fee in exchange for benefits such as one-click transactions and password memory. Established in 1998 as Confinity, PayPal had its initial public offering in 2002, and became a wholly owned subsidiary of eBay later that year. In 2014, eBay announced plans to spin-off PayPal into an independent company by mid-2015 and this was completed on July 18, 2015. In 2018, eBay announced that after the existing eBay- PayPal agreement ends in 2020, PayPal will remain a payment option for shoppers on eBay, but it won’t be prominently featured ahead of debit and credit card options as it is today. PayPal will cease to process card payments for eBay at that time. 4. Hyperloop A Hyperloop is a proposed mode of passenger and/or freight transportation, first used to describe an open- source vactrain design released by a joint team from Tesla and SpaceX. 
Drawing heavily from Robert Goddard's vactrain, a hyperloop is a sealed tube or system of tubes through which a pod may travel free of air resistance or friction, conveying people or objects at high speed while being very efficient. Elon Musk's version of the concept, first publicly mentioned in 2012, incorporates reduced-pressure tubes in which pressurized capsules ride on air bearings driven by linear induction motors and air compressors. The Hyperloop Alpha concept was first published in August 2013, proposing and examining a route running from the Los Angeles region to the San Francisco Bay Area roughly following the Interstate 5 corridor. The paper conceived of a hyperloop system that would propel passengers along the 350-mile (560 km) route at a speed of 760 mph (1,200 km/h), allowing for a travel time of 35 minutes, which is considerably faster than current rail or air travel times. Preliminary cost estimates for this suggested LA–SF route were included in the white paper: US$6 billion for a passenger-only version, and US$7.5 billion for a somewhat larger-diameter version transporting passengers and vehicles, although transportation analysts doubted that the system could be constructed on that budget; some analysts claimed that the Hyperloop would be several billion dollars over budget, taking into consideration construction, development and operation costs [3]. The Hyperloop concept has been explicitly open-sourced by Musk and SpaceX, and others have been encouraged to take the ideas and develop them further.

References: 1. Mode of access: https://en.wikipedia.org/wiki/Tesla,_Inc. – Date of access: 17.03.2018. 2. Mode of access: https://en.wikipedia.org/wiki/SpaceX. – Date of access: 19.03.2018. 3. Mode of access: https://www.indiatimes.com/lifestyle/self/11-great-inventions-by-elon-musk-that-are-changing-the-world-we-live-in-334060.html. – Date of access: 19.03.2018.

УДК 004.318:811.111 Kozlova L., Nekrashevich D., Bankovskaya I. Pentium II Xeon Processor. Belarusian National Technical University, Minsk, Belarus

The most important component of any personal computer is its microprocessor. This element largely determines the capabilities of the computing system and is, figuratively speaking, its heart. To date, Intel remains the undisputed leader in the creation of modern processors [1]. Since the beginning of July 1998, a series of events dedicated to the presentation of the most powerful processor in Intel Corporation's x86 architecture has been held around the world. Long before that, its name and purpose had become known from information posted on Intel web sites. It was emphasized that the word Xeon should be pronounced softly, as "Zeon", but the Russian office decided to subordinate this name to the norms of the Russian (and Greek) language. The new processor, by the way, was a gift to the manufacturing company itself on the occasion of its thirtieth anniversary.

The first thing that catches the eye is the unusually large cartridge in which the Xeon is packaged. It is designed to fit into the new Slot 2 connector; according to the developers, this is due to the increased capacity of the second-level cache memory. At the moment Xeon processors of the same clock frequency are supplied in two versions, with 512 KB and with 1 MB of L2 cache, but this year it is planned to increase the second-level cache capacity to 2 MB and the clock frequency to 450 MHz. Recall that the old Pentium II was equipped with only 512 KB.
The high clock frequency of the cache increased the heat dissipation of the processor module, so a massive heat-absorbing plate had to be used, which in turn increased the weight and dimensions of the module. Each Slot 2 module contains three special data areas: a read-only area, a read/write area, and dynamic information about the temperature inside the processor module. The first area holds information about the processor version, stepping information and the maximum allowed temperature. In the second memory area users can enter their own information. Access to the dynamic temperature data allows monitoring programs to notify the administrator about dangerous system events.

Increasing the capacity of the second-level cache raises system throughput by allowing the processors to access frequently used data and instructions instantly from fast cache memory. According to Intel, the increase in cache capacity from 512 KB to 1 MB sometimes leads to a 20% increase in the overall performance of the system. To explain this phenomenon, it is appropriate to draw the analogy Intel itself uses, with refrigerators: storing food in the refrigerator saves restaurant chefs from having to go shopping for provisions, and the larger the refrigerator, the better, especially at peak times, when the number of customers in the restaurant rises sharply. In the case of the server, the refrigerator is the second-level cache memory, and the store (where the same products are available) is the, in principle, slower system memory. A large L2 cache significantly improves the overall performance of multiprocessor configurations on systems that process large arrays of data. According to Intel, the ZD ServerBench tests carried out by the corporation showed a nearly proportional increase in system performance as additional processors with a 1 MB cache were installed.

The advanced Xeon architecture, which allows 36-bit addressing of physical memory, theoretically lets the processor access up to 64 GB of system memory. The new page-exchange mechanism, Page Size Extension-36, remains almost invisible to users and application developers. Currently PSE-36 is supported by the Windows NT, SCO UnixWare and Sun Solaris operating systems; for other operating systems the memory management unit driver will need to be updated.

The Intel 450NX PCIset became the first chipset optimized for Pentium II Xeon processors. It is available in two versions, Basic and Full, for high-end server and midrange systems respectively. They have the same core structure but differ in performance and price. The Basic PCIset supports up to two 32-bit PCI buses, one 64-bit bus and up to 4 GB of EDO-type system memory; the more advanced Full PCIset supports up to four EDO-type memory slots. These chipsets combine 100 MHz operation of the system bus with the ability to support multiprocessor (up to four Xeon) configurations. The 64-bit PCI bus can significantly improve the overall performance of the system, including fibre-optic data exchange with disk arrays and the use of high-performance network backbones based on ATM, Gigabit Ethernet and others; the balance between processor power and I/O subsystem performance improves substantially. Another feature of the 440GX chipset was the ability to address up to 2 GB of memory, twice as much as its predecessor.
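The 64 GB figure quoted above for 36-bit physical addressing is easy to verify with a short calculation; the snippet below is only an illustrative sketch of the address-space arithmetic, not code related to PSE-36 itself.

    # Illustrative calculation: physical address space with 32-bit vs. 36-bit addressing.
    GIB = 2 ** 30                 # bytes in one gibibyte

    classic_32bit = 2 ** 32       # ordinary 32-bit physical addressing
    pse36_36bit = 2 ** 36         # 36-bit addressing enabled by PSE-36

    print(classic_32bit // GIB, "GB")   # 4 GB  - the usual 32-bit limit
    print(pse36_36bit // GIB, "GB")     # 64 GB - the theoretical Xeon limit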
Although at present the concept of multiprocessing is associated for Intel with only four devices on one board, work is under way on a symmetric multiprocessor system supporting up to eight Xeons. The eight-way chipset for the Xeon is being developed by Corollary, a subsidiary of Intel. Cluster solutions are, of course, also possible, for example ones based on a distributed-memory architecture (NUMA). In both cases, as a rule, application programs do not need to be rewritten (although the operating system requires some optimization). The processor-bus chipset Intel 450NX PCIset provides a so-called cluster connection connector, which makes it easy to build a cluster from standard four-processor nodes. Another promising direction is clustering with message passing; its essence is that resources are not shared. Autonomous cluster nodes exchange service data, such as heartbeat pulses indicating the normal state of the system. Although a LAN connection remains functional for this purpose, a new type of network is needed, the so-called SAN (System Area Network) [2].

In conclusion, I would like to note that some leading Western manufacturers (IBM, NCR, Dell) have already started supplying Xeon-based systems, and at the presentation of the processor in Russia Kraftway and Vist also presented their new server solutions. Approximate prices for the Pentium II Xeon will be $1,124 (512 KB L2) and $2,836 (1 MB L2) when supplied in lots of a thousand units.

References: 1. Mode of access: https://en.m.wikipedia.org/wiki/Pentium_II. – Date of access: 24.02.2018. 2. Mode of access: https://books.google.by/books?id=2ogntwEACAAJ&dq=System+area+Network&hl=ru&sa=X&ved=0ahUKEwjI47_XmajbAhXkK5oKHdU9C9AQ6AEIJjAA. – Date of access: 27.02.2018.

УДК 004.056:811.111 Kubarskiy M., Borodin A., Bankovskaya I. Importance of Information Security in Organizations. Belarusian National Technical University, Minsk, Belarus

Information is one of the most important assets of an organization. For an organization, information is valuable and should be appropriately protected. Security combines systems, operations and internal controls to ensure the integrity and confidentiality of data and operating procedures in an organization. The history of information security begins with the history of computer security, which started around 1980. At that time the use of computers was concentrated in computer centres, and computer security was implemented by protecting the physical computing infrastructure, which was highly effective for the organization. Although the openness of the Internet enabled businesses to adopt its technology ecosystem quickly, it also proved to be a great weakness from an information security perspective. The system's original purpose, as a means of collaboration between groups of trusted colleagues, is no longer practical, because its usage has expanded to millions of frequently anonymous users. Numerous security incidents related to viruses, worms and other malicious software have occurred since the Morris worm, which was the first of them and shut down 10% of the systems on the Internet in 1988. These incidents have become increasingly complex and costly. However, information security awareness has increased, and many organizations have implemented information security measures to protect their data. In general, information security can be defined as the protection of data owned by an organization or an individual from threats and risks.
According to the Merriam-Webster Dictionary, security in general is the quality or state of being secure, that is, of being free from harm. Information security is the collection of technologies, standards, policies and management practices that are applied to information to keep it secure. Information security performs four important functions for an organization: it enables the safe operation of applications implemented on the organization's information technology (IT) systems, protects the data the organization collects and uses, safeguards the technology assets in use at the organization, and, lastly, protects the organization's ability to function.

There are five theories that determine the approach to information security management in an organization:
- Security policy theory. Aims to create, implement and maintain an organization's information security needs through security policies.
- Risk management theory. Evaluates and analyzes the threats to and vulnerabilities in an organization's information assets. It also includes the establishment and implementation of control measures and procedures to minimize risk.
- Control and audit theory. Suggests that an organization needs to establish control systems (in the form of a security strategy and standards) with periodic auditing to measure the performance of the controls.
- Management system theory. Establishes and maintains a documented information security management system. This includes information security policies that combine factors internal and external to the organization and cover the scope of the policy, risk management and the implementation process.
- Contingency theory. Treats information security as part of contingency management, intended to prevent, detect and respond to threats and weaknesses both internal and external to the organization.

Employees should know their boundaries. They should be able to separate their personal life from their job and should not take advantage of company facilities for personal purposes, because doing so can invite attacks and puts the organization's information at risk. The organization should explain this to the staff so that they know what they may and may not do; the rules and ethics of the workplace should be explained to employees before they start work. The organization should also establish, implement and maintain information security policies, to ensure that employees follow the rules when accessing information. In order to raise awareness of security issues among employees, the organization should take several steps to improve their understanding of the importance of information security. One method the organization can use is to educate its employees about data protection and to train the staff in ways of protecting data. By implementing these methods, employees gain a better understanding of information security and can protect information well.

Employees must understand and accept the risks that come with using technology, and the Internet in particular. Employees and the organization's personnel must ensure that the organization's computer network is securely configured and actively managed against known threats. IT network professionals should also help the organization maintain a secure virtual environment by reviewing all computer assets and determining a plan for preventive maintenance. This includes routinely cleaning up unnecessary or unsafe programs and software, applying security patches (small pieces of software designed to improve computer security) and performing routine scans to check for intrusions. The organization may also review access rights and have an IT professional set up an automated procedure that requires employees to change their passwords at regular intervals, to further protect the organization's information assets.
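As a small illustration of the kind of automated password-renewal procedure mentioned above, the following sketch flags accounts whose passwords are older than the allowed interval; it is a hypothetical example, and the record format, field names and 90-day interval are assumptions rather than recommendations from the article.

    # Hypothetical sketch: flag accounts whose password age exceeds the policy limit.
    from datetime import date

    MAX_PASSWORD_AGE_DAYS = 90  # assumed policy interval

    accounts = [  # in practice this would come from the organization's directory service
        {"user": "j.smith", "password_set": date(2018, 1, 5)},
        {"user": "a.ivanov", "password_set": date(2018, 3, 30)},
    ]

    def passwords_to_renew(accounts, today):
        # Return the users whose password is older than the allowed interval.
        return [a["user"] for a in accounts
                if (today - a["password_set"]).days > MAX_PASSWORD_AGE_DAYS]

    print(passwords_to_renew(accounts, date(2018, 4, 12)))  # ['j.smith']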
In addition, computers should run up-to-date protection software, such as the latest antivirus, to protect them from virus attacks. To protect and secure confidential information well, the organization should hire IT experts and employees with the right qualifications to protect the data, so that employees know what to do if a problem occurs. Moreover, IT experts and qualified staff have a better understanding of information security and know the steps needed to keep information safe at all times. When employees lack information security knowledge, the organization can easily be attacked by hackers or other threats that try to steal or obtain the organization's confidential information [1]. In conclusion, we may summarize that it is crucial for all staff in an organization to have knowledge and understanding of the importance of information security practice in order to protect confidential data.

References: 1. Mode of access: https://www.uniassignment.com/essay-samples/information-technology.php. – Date of access: 07.03.2018.

УДК 530.16 Shpakovsky E., Tretyakevich M., Bazyleva I. Teleportation as One of the Mysteries of Our Time. Belarusian National Technical University, Minsk, Belarus

Teleportation is a hypothetical change in the coordinates of an object (a displacement) in which the trajectory of the object cannot be described by any mathematical law. The term was introduced in 1931 by the American writer Charles Fort to describe strange disappearances and appearances and paranormal phenomena which, in his opinion, had something in common [1]. Putting it in simple words, teleportation is the momentary movement of an object from one point of the globe to another. Nowadays there are two camps, corresponding to two types of teleportation: quantum teleportation (for inanimate objects) and hole teleportation (for a person). The essence of quantum teleportation is that a certain channel is created (for the time being called a quantum channel) through which object A transfers its properties and form to object A1, and A1 duplicates all the parameters of A. After that, A is destroyed, and its absolute double continues to exist in the chosen form at the destination. In 2001 scientists at the University of Aarhus (Denmark), using gas clouds as an example, demonstrated the possibility of quantum teleportation. At the same time they found that quantum teleportation occurs in four stages: first, scanning takes place and the original is read; then, in the second stage, there is disassembly, the splitting and translation of the information about the object into a certain code; in the third stage the code is transferred to the place selected for assembly; and at the end there is reconstruction at the final point.
However, this kind of teleportation was performed on inanimate gas clouds, and it is considered impossible to transfer a person by this method, for a number of reasons. The first is that the process of encoding and data processing already stretches out too long in time, and it is difficult to say for now how long the connection between the disassembly point and the assembly point would last; in the Danish experiments with gas clouds, the connection lasted only thousandths of a second. It also seems unlikely that the model and structure of the reconstructed object would preserve the order and organic integrity of the original. How would the structures associated with the neurons of the brain and spinal cord behave? And, moreover, there is consciousness. Would the impulse connections in the body and the direction of the blood flow in the vessels be preserved accurately in such a transition, or would we get a formless biomass as a result?

The method of hole teleportation implies the existence of so-called zero-transitions serving as transition doors, which are either discovered or created. This method is more appropriate for a human being, and it is the safest one, since there is no disassembly of the body and its integrity and structure are preserved. The biggest disadvantage of hole-type teleportation is the uncertainty of the place of displacement and materialization.

But like any other idea from science fiction, teleportation has its drawbacks. Firstly, life would become boring and inactive, since people would simply stop moving; secondly, it would deal a significant blow to the economy: road taxes would be lost, the work of customs would become unnecessary, and the manufacturers of vehicles would lose their profits. So until there is a well-established system of control over instantaneous movements, there is no point in talking about providing this technology to society.

However, historical chronicles suggest that no technology is necessary in order to teleport. Two cases can be recalled. In the 1st century AD the Emperor Domitian put the philosopher Apollonius on trial in Rome; the defendant disappeared from the courtroom in front of the emperor and the assessors and appeared the same day several days' journey from Rome. The second case concerns the soldiers of Alexander the Great. It happened in Egypt with a small reconnaissance detachment of riders sent out by Alexander. The detachment had not yet passed behind the nearest hill when it suddenly vanished in front of the whole army. The great commander sent another detachment after them to find out what had happened, but they found nothing except tracks that broke off sharply. In our time there have been many studies of that place; perhaps there were caves or pits into which the riders could have fallen, but neither caves nor pits were found [2].

Summing up, we can say that teleportation is one of the most grandiose ideas of our time. Its advantages are indisputable, and the possibility of teleportation would certainly turn the whole world upside down, but despite this, at the present stage it is still too early to talk about providing this technology to society. So for the time being we will observe this amazing technology in action only in books and films.

References: 1. Teleportation [Electronic resource]. – Mode of access: https://ru.wikipedia.org/wiki. – Date of access: 14.04.2018. 2.
Телепортация – исторические факты, изучение [Электронный ресурс]. – Режим доступа: https://inkusto.com/stati/chelovek-vo-vselennoj/104- teleportatsiya-istoricheskie-fakty-izuchenie. – Дата доступа: 28.03.2018. 30 УДК 004.946:811.111 Gutyra A., Vychik F., Bazyleva I. Virtual and Augmented Reality Belarusian National Technical University Minsk, Belarus The industry of virtual and augmented reality is one of the most trending nowadays. Numerous companies and startups develop complex devices that can create detailed virtual worlds and enhance our understanding of ordinary objects. Virtual reality (VR) is a term that can describe non- existent world that was created with the help of electronic devices and the entire industry. Such devices create an illusion that you interact with real objects in the real world, but virtual environment is only generated by a computer and simulated with the help of a VR system. Its history began in the 1960s when the definition of artificial reality was introduced by Myron Krueger. The first VR device was called Sensorama and the first computer- generated virtual space was named Aspen’s Movie Map. Augmented reality (AR) is a result of combining real-world environment with computer-generated one. It alters one’s natural perception and vision, while virtual reality fully simulates it. AR enhances our vision by bringing virtual elements into the real world. The history of AR started in 1990s. First commercial devices were used for entertainment purposes, but huge modern companies are mostly interested in professional gadgets. VR is usually organized in the form of glasses or helmet. Fully simulated environment needs to be rendered with the help of powerful computer and require a lot of wires. A 31 complete environment is obtained with the help of adjustable lenses. They make the picture similar to human vision and increase the viewing angle. AR is represented in the form of HUDs (head-up displays) or smartphone applications. Unlike VR devices, AR ones are independent, i.e. they do not require a computer to work. HUDs put augmented environment directly in front of your face. The device can exist in the form of a helmet or glasses. The main idea of augmented reality is to decrease the amount of hardware for comfortable wearing and using. Applications should be installed on your smartphone before you can use them. Your phone must have a camera to provide the app with raw data. Advanced AR devices also have sensors and multiple cameras on them to define the state and the position of physical objects more correctly. We can find numerous applications of artificial reality. Nowadays, mixed reality is ready to be used in marketing. VR experience is much stronger than traditional one. The experience gained after using VR and AR devices contributes to the formation of company’s image and attracts investments, mass media and clients. Although computer- generated reality is mostly used for entertainment and marketing purposes nowadays, VR/AR devices can find applications in numerous professional spheres. Education is one of the most perspective fields to apply VR/AR products in. Other prospective fields are design, engineering and architecture. Three-dimensional models are much more visually attractive than the projections on blueprints. The usage of VR/AR devices in Belarus is a prospective branch, but nowadays it is poorly developed. It started in 2015, when MSQRD application was developed. 
Nowadays numerous exhibitions and museums (for example, Belarusian National Historical Museum) use VR and AR devices to complete the event with cutting-edge interactive elements. 32 Mixed reality has some disadvantages. Firstly, good VR/AR devices are quite expensive. Secondly, most AR glasses and helmets need a lot of space for some electronic components, and VR devices need powerful PCs and wires. Thirdly, scientists and psychologists have an ambiguous opinion about the impact of virtual reality on human health. Specialists think that long-term immersion in virtual reality has a very strong influence on our minds. Another group of specialists think that frequent using of VR devices can affect social behavior and make the person addicted to the virtual world. However, VR/AR devices can help people with limited abilities. Microsoft HoloLens is mixed reality eyeglasses by Microsoft. The target audience of Microsoft HoloLens is business, but Microsoft plans to make it widespread in the future. HoloLens is represented in the form of a headband with a head-mounted display. We can control the glasses by some gestures, voice, special clicker or by pressing buttons. HoloLens follow the direction of the user’s eyes to highlight holograms the user is looking at. Software developers can use different APIs and 3D engines to create applications and virtual environment. Oculus Rift is one of the first modern commercial VR kits. It is mostly used to play VR-supported computer games. However, Facebook (current owner of Oculus VR company) will make a version for professional applications too. There are also some analogs of Oculus Rift, e.g., PlayStation VR, HTC Vive, Samsung Gear VR etc. VR box is one of the examples of cheap virtual reality products. It is a good variant to start your acquaintance with VR. You can buy it for about $20-$50. The cheapest device made from cardboard is called Google Cardboard ($1-$5). VR Box looks like ordinary VR glasses, but it doesn’t have a screen. You have to use your smartphone as a screen and as a computer. Google 33 Glass is represented as an optical head-mounted display in the shape of eyeglasses. The device combines the opportunities of AR and Internet communication. It runs Android OS. Software developers are provided with Android API and powerful Google services like Google Maps. Virtual reality gives a lot of possibilities in numerous fields from entertainment and marketing to engineering and education, though it requires a lot of time and resources to create comfortable, relatively cheap and user-friendly devices. 34 УДК 0049(476):811.111 Bobnis U., Kovalikhin A., Bazyleva I. IT Industry of the Republic of Belarus Belarusian National Technical University Minsk, Belarus Digital transformation of all aspects of business is on the agenda in the companies all over the world. Today, three key factors influencing the company’s success and prospects for the future are the ability to create and manage digital technologies, access to technological talents, and the speed and cost of transformation [1]. In the Republic of Belarus the IT industry is quite well developed, but in order to understand it we need to start with the concepts, consider the current state and development of the Belarusian IT industry. Industry is a set of enterprises engaged in the production of tools, extraction of raw materials, fuel, energy production and subsequent processing of products. 
Information technology (IT industry) is processes, methods of searching, collecting, storing, processing, providing, delivering information and ways of implementing such processes and methods; ways and methods of application of computer facilities in the performance of functions for the collection, storage, processing, transmission and the use of data; resources necessary for collecting, processing, storing and delivering information. In the past few years, Belarus has gained a reputation of the leading IT-country in Eastern European region and in the world. According to the Global Services 100 rating, Belarus ranked 13th among the 20 leading IT outsourcing and high- tech services. In addition, six HTP resident companies have 35 been included in the list of the best providers of outsourcing services, having been included in the 2017 Global Outsourcing 100 rating. These are Bell Integrator, Ciklum, EPAM, IBA Group, Intetics and Itransition. In the UN IT ranking Belarus takes 48th place. Ten companies from the world’s largest software companies rankings Software 500 have development offices in Belarus. These are EPAM (107), Bell Integrator (281), IBA (281), Itransition (368), Coherent Solution (393), SoftClub (409), Artezio (416), Intetics (419), Oxagile (456), IHS (482) [2]. The IT industry plays a key role in the Belarusian economy. It has grown and developed significantly in the last decade. From 2005 to 2016, the export of IT services and products increased by 30%, while the share of IT exports in the total volume of exports of goods and services in Belarus increased from 0.16% to 3.25%. Experts are convinced that the industry has great prospects, and they are increasingly showing interest in our IT companies. Over the past 10 years, the IT industry in Belarus, unlike other sectors of the economy, has shown a steady growth in income, exports, labor and other indicators. The sphere of Information technologies and communications employs more than 85,000 people as a workforce, which includes about 34,000 professionals in the field of IT products and services. Other 30,000 IT professionals work in various economic sectors. There has also been a significant increase in the demand for products and services of Belarusian IT in recent years: more than 90% of sales of Belarusian IT companies are sales to external IT market. The state has a strong influence on industry through local laws that regulate the business environment [1]. The most popular in the IT field are such companies as EPAM, Itransition and Wargaming. In 1993 two classmates Arkady Dobkin and Leonid Lozner created one of the world’s 36 largest software developers and distributors – EPAM Systems. In 1996 a graduate of the Faculty of Applied Mathematics and Informatics of the BSU, a lecturer and HR IBA Group, Sergey Gvardeitsev created the company Itransition. In 1998 a student of the Physics Department of the BSU Viktor Kisly, who was studying lasers and spectroscopy, created the company Wargaming and started developing the first commercial product – game DBA Online. On August 12, 2010 Wargaming.net released a Russian version of the multiplayer online game World of Tanks. The game enjoyed a phenomenal success. In January 2011 the number of World of Tanks’ users was over 1 million people. In February 2011, World of Tanks entered the Guinness Book of Records for the simultaneous presence of users on the game server (91,311 people). 
In January 2013 World of Tanks set a new world record among all MMO games for the Guinness Book of Records: 190,541 players were fighting simultaneously in tank online battles on one of the five servers of the Russian cluster. Two Israeli businessmen, Talmon Marco and Igor Magazinik, launched the first version of Viber’s pilot application, a competitor to Skype, which allows free communication over the Internet [3]. Today the average salary in the ICT sector is higher than in other sectors of the economy. In 2016 the average salary for Belarus was about $ 400, whereas in the ICT sector an average of $ 1.8 thousand was earned. The average earnings in HTP are expected to be $ 2.4 thousand in 2020 [4]. IT products and services are the fastest growing segment of the economy in terms of revenues and exports. The export of computer services has grown 36 times in 12 years and amounted to 956.8 million in 2016. In Belarus 75,000 students (24% of the total number of university students) study in STEM-specialties, including about 70 IT-specializations. The share of the graduates of the Belarusian State University of Management and Information Systems is 35.4% of employees of HTP resident companies. 37 About 12% of those employed in the IT industry are students. A large number of employees in the IT industry of Belarus have higher education (about 76%). Another characteristic of the sector is the youth: 57% of the staff of HTP resident companies are under 30 years. The career path in the industry usually begins before the age of 25. The share of girls in the IT industry slightly decreased compared to the previous year, nevertheless it accounts for almost a fifth of the total number of employees. Compared to 2010 the number of business people in Belarus has grown 2.5 times. The most popular programming languages are Javascript (57%), SQL (52%), Java (48%), C++ (38%) and Python (18%). Taking into account all these factors, it is certain that the Belarusian IT industry will have a high chance to continue developing and generating revenue. References: 1. The IT industry in Belarus 2017: and Beyond [Electronic resource]. – 2017. – Mode of access: http://www.ey.com/Publication/vwLUAssets/ey-it-industry-in- belarus-2017-and-beyond/$FILE/ey-it-industry-in-belarus- 2017-and-beyond.pdf. – Date of access: 10.04.2018. 2. IT-industry [Electronic resource]. – Mode of access: http://belarusfacts.by/en/belarus/economy_business/key_econo mic/it/ – Date of access: 10.04.2018. 3. История развития ИТ отрасли в Беларуси [Электроннный ресурс]. – Режим доступа: http://itmentor.by/articles/istoriya-razvitiya-it-otrasli-v- belarusi. – Дата доступа: 25.03.2018. 4. ИТ в Беларуси-2016: в индустрии еще никогда не было столько новичков [Электроннный ресурс]. – Режим доступа: https://dev.by/lenta/main/it-v-belarusi-2016. – Дата доступа: 24.03.2018. 38 УДК 669.713:811.111 Guevich M., Beznis Y. Production and Recycling of Aluminium Belarusian National Technical University Minsk, Belarus Even though aluminium is the most common metal on the planet, pure aluminium does not occur naturally. Aluminium atoms easily bind with other metals, forming compounds. At the same time it's impossible to isolate aluminium by simply melting down the compounds in a furnace, as is the case with iron, for example. The aluminium production process is much more complex and requires huge amounts of electricity. 
For this reason, aluminium smelters are always built in the vicinity of power energy sources, usually hydroelectric power plants that don't contaminate the environment [1]. The aluminium production process can be broken down into three stages; first bauxites, which contain aluminium, are extracted from the ground. Second, bauxites are processed into alumina or aluminium oxide, and finally in stage three, pure aluminium is produced using electrolytic reduction. About 4-5 tons of bauxites get processed into 2 tons of alumina from which about 1 ton of aluminium can be made. There are several minerals available in the world from which aluminium can be obtained, but the most common raw material is bauxite. Bauxite is a mineral made up primarily of aluminium oxide mixed with some other minerals. Bauxite is regarded as high quality if it contains more than 50% of aluminium oxide. There is a lot of variation in bauxites. Structurally they can be solid and dense or crumbly. The usual color is brick red, 39 flaming red or brown because of iron oxide. If iron content is low, bauxite can be grey or white. But yellow, dark green and even multi-colored bauxites with bluish, purple, red and black strains occur too. About 90% of global bauxite supplies are found in tropical and subtropical areas, with 73% found in just five countries: Guinea (having the largest supply), Brazil, Jamaica, Australia and India. The most common way to mine for bauxites is by using open pit mines. Special equipment is used to cut one layer after another off the surface, with the rock then being transported elsewhere for further processing. However, there are places where aluminium ore has to be mined from deep underground which require underground mines to be built to get at it. Pure aluminium oxide, called alumina, is extracted from bauxite via a process called refining, composed of two steps: a digestion process, using caustic soda, which allows the separation of aluminium hydroxide from the so-called bauxite residue, followed by a calcination step which removes the water content in the hydroxide. Both the aluminium hydroxide and the aluminium oxide have further applications outside of the metal industry [2]. In 1886, two 22-year-old scientists on opposite sides of the Atlantic, Charles Hall of the USA and Paul L.T. Heroult of France, made the same discovery – molten cryolite (a sodium aluminum fluoride mineral) could be used to dissolve alumina and the resulting chemical reaction would produce metallic aluminum. The Hall-Heroult process remains in use today. The Hall-Heroult process takes place in a large carbon or graphite lined steel container called a reduction pot. In most plants, the pots are lined up in long rows called potlines. The key to the chemical reaction necessary to convert the alumina to metallic aluminum is the running of an electrical current through the cryolite/alumina mixture. 40 The process requires the use of direct current (DC) – not the alternating current (AC) used in homes. The electrical voltage used in a typical reduction pot is only 5.25 volts, but the amperage is very high – generally in the range of 100,000 to 150,000 amperes or more. The current flows between a carbon anode (positively charged), made of petroleum coke and pitch, and a cathode (negatively charged), formed by the thick carbon or graphite lining of the pot. When the electric current passes through the mixture, the carbon of the anode combines with the oxygen in the alumina. 
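To make the figures above concrete, the short C sketch below works out the electrical power drawn by a single reduction pot from the quoted voltage and amperage, together with the approximate bauxite-to-alumina-to-aluminium mass yields; the overall Hall-Heroult cell reaction (2 Al2O3 + 3 C -> 4 Al + 3 CO2) is noted in a comment. The numbers are only the rounded values quoted in this text, not plant data.

/* Rough arithmetic based on the figures quoted above: one reduction pot
 * running at about 5.25 V and 100,000-150,000 A, and the approximate mass
 * balance of 4-5 t bauxite -> 2 t alumina -> 1 t aluminium.
 * Overall cell reaction: 2 Al2O3 + 3 C -> 4 Al + 3 CO2.                  */
#include <stdio.h>

int main(void)
{
    double voltage     = 5.25;       /* volts per pot (from the text)   */
    double current_lo  = 100000.0;   /* amperes, lower bound            */
    double current_hi  = 150000.0;   /* amperes, upper bound            */

    /* Electrical power drawn by one pot, P = V * I, in megawatts */
    printf("Power per pot: %.2f - %.2f MW\n",
           voltage * current_lo / 1e6, voltage * current_hi / 1e6);

    /* Approximate mass yield along the production chain (tonnes) */
    double bauxite = 4.5, alumina = 2.0, aluminium = 1.0;
    printf("Alumina per tonne of bauxite:   %.2f t\n", alumina / bauxite);
    printf("Aluminium per tonne of alumina: %.2f t\n", aluminium / alumina);
    return 0;
}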
The chemical reaction produces metallic aluminum and carbon dioxide. The molten aluminum settles to the bottom of the pot where it is periodically syphoned off into crucibles while the carbon dioxide – a gas – escapes. Very little cryolite is lost in the process, and the alumina is constantly replenished from storage containers above the reduction pots [3]. The metal is now ready to be forged, turned into alloys, or extruded into the shapes and forms necessary to make appliances, electronics, automobiles, airplanes, cans and hundreds of other familiar, useful items. Aluminum is formed at about 900°C, but once formed has a melting point of only 660°C. In some smelters this spare heat is used to melt recycled metal, which is then blended with the new metal. The smelting process required to produce aluminum from the alumina is continuous, the potline is usually kept in production 24 hours a day year-round. A smelter cannot easily be stopped and restarted. If production is interrupted by a power supply failure of more than four hours, the metal in the pots will solidify, often requiring an expensive rebuilding process. The cost of building a typical, modern smelter is about $1.6 billion [3]. Globally, the aluminum industry annually emits millions of tons of greenhouse gases such as carbon dioxide, which contributes to global warming. Although aluminum cans 41 represent only 1.4 percent of a ton of garbage by weight, according to the Container Recycling Institute, they account for 14.1 percent of the greenhouse gas impacts associated with replacing an average ton of garbage with new products made from virgin materials [4]. Aluminum smelting also produces sulfur oxide and nitrogen oxide, two toxic gases that are key elements in smog and acid rain. In addition, every ton of new aluminum cans that must be produced to replace cans that were not recycled requires five tons of bauxite ore, which must be strip-mined, crushed, washed and refined into alumina before it is smelted. That process creates about five tons of caustic mud that can contaminate both surface water and groundwater and, in turn, damage the health of people and animals. There is no limit to how many times aluminum can be recycled. That's why recycling aluminum is such a boon for the environment. Aluminum is considered a sustainable metal, which means it can be recycled again and again with no loss of material. Aluminum recycling provides many environmental, economic and community benefits; it saves energy, time, money and precious natural resources; and it generates jobs and helps to pay for community services that make life better for millions of people. References: 1. Mode of access: https://aluminiumleader.com/production/how _aluminium_is_produced. – Date of access: 23.02.2018. 2. Mode of access: https://www.european-aluminium.eu. – Date of access: 10.03.2018. 3. Mode of access: https://rocksandminerals.com/MineralInfor mation/Aluminum. – Date of access: 27.02.2018. 4. Mode of access: https://www.thoughtco.com/the-benefits- of-aluminum-ecycling-1204138. – Date of access: 01.03.2018. 42 УДК 629.5:811.111 Podgorny A., Rachko E., Beznis Y. Welding Manipulators in Shipbuilding Belarusian National Technical University Minsk, Belarus A manipulator is a device used to manipulate materials without direct contact. Its applications were originally for dealing with radioactive or biohazardous materials, using robotic arms, or they were used in inaccessible places. 
In more recent developments they have been used in diverse range of applications including welding automation, robotically-assisted surgery and in space. A manipulator is an arm-like mechanism that consists of a series of segments, usually sliding or jointed called cross-slides, which grasp and move objects with a number of degrees of freedom [1]. Manipulators are designed, constructed and developed by the science, called Mechatronics that is the synergistic integration of mechanical, electrical and computer systems. With the help of Robotics (the application of mechatronics to create robots which are often used in industry to perform tasks), manipulators can do different work much better than human operatives [2]. In industrial ergonomics a manipulator is a lift assist device used to help workers lift, maneuver and place articles in process that are too heavy, too hot, too large or otherwise too difficult for a single worker to manually handle. As opposed to simply vertical lift assists (cranes, hoists, etc.) manipulators have the ability to reach in to tight spaces and remove workpieces. A good example would be removing large stamped parts from a press and placing them in a rack or similar dunnage. In welding, a column boom manipulator is 43 used to increase deposition rates, reduce human error and other costs in a manufacturing setting. Additionally, manipulator tooling gives the lift assist the ability to pitch, roll, or spin the part for appropriate placement. An example would be removing a part from a press in the horizontal and then pitching it up for vertical placement in a rack or rolling a part over for exposing the back of the part [1]. A welding manipulator can be either open arc or submerged arc. A welding manipulator can be used to weld horizontally and vertically and is ideal for job shops as they are robust, have high production volume capacity and a greater degree of flexibility in product engineering. Welding manipulators are commonly used in pipe and vessel fabrication [3] but can be also used in a cladding procedure with the aid of a proper welding fixture. Ship building automatic welding. In today’s demanding and competitive ship building and repair industry, new technology is very much needed and automation plays a key role in improving the productivity and quality of shipyards. Welding is a fundamental task in shipyards and marine/offshore companies. Robotic welding is very attractive because of its robustness and manipulability and has been recognized as the next step in technological advancement of shipyards. There are many commercially available robotic welding systems that have been applied to shipyards, the robotic system of Odense Steel Shipyard Ltd in Denmark being one of the most notable. This robotic welding system is integrated into a CAD system and robots are programmed offline. Offline programming systems require the availability of CAD data describing the workpieces to be welded. A model of the robot and the welding process is then simulated in the computer together with a CAD model of the workpieces and environment. With a simulation environment, the robot program can be developed offline and tested before it is 44 downloaded or implemented in the actual robot. However, such system has got a number of disadvantages. Offline robot programming systems require an accurate description of the workpieces and layout of the environment. 
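As a purely illustrative sketch of what offline programming implies in practice, the short C program below treats a weld seam as a handful of CAD-derived points, turns them into torch poses and checks them against a crude reach limit before they would be downloaded to a real robot. All names, coordinates and the reach value are invented for the example and do not describe the Odense system or any commercial package.

/* Hypothetical illustration of the offline-programming idea described
 * above: weld seam points come from CAD data, the program turns them into
 * torch poses and checks them against a (very crude) reach limit before
 * they would be downloaded to the real robot.                            */
#include <stdio.h>
#include <math.h>

typedef struct { double x, y, z; } Point;            /* CAD seam point, metres */
typedef struct { Point position; double work_angle; } TorchPose;

#define REACH_LIMIT 2.5   /* assumed robot reach from its base, metres */

int main(void)
{
    /* Seam geometry as it might be exported from a CAD model */
    Point seam[] = { {0.5, 0.0, 0.2}, {0.5, 0.5, 0.2}, {0.5, 1.0, 0.2} };
    int n = sizeof seam / sizeof seam[0];

    for (int i = 0; i < n; i++) {
        TorchPose pose = { seam[i], 45.0 };           /* fixed work angle */
        double reach = sqrt(pose.position.x * pose.position.x +
                            pose.position.y * pose.position.y +
                            pose.position.z * pose.position.z);
        printf("pose %d: (%.2f, %.2f, %.2f) %s\n", i,
               pose.position.x, pose.position.y, pose.position.z,
               reach <= REACH_LIMIT ? "reachable in simulation"
                                    : "outside simulated reach");
    }
    return 0;
}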
Robotic welding systems are very complicated to use, they require a robot programmer and/or application engineer which shipyards do not normally have. Also, CAD data of plates, webs, stiffeners are not available. Part geometries are only available in manual drawings and this makes off-line programming technique for robot teaching not applicable. Another problem is the workpieces are very large. The robotic system SWERS (Ship Welding Robotic System) developed by the National University of Singapore is based on a completely new approach to robotics [4]. SWERS includes a special teaching procedure that allows the human user to teach the robot welding paths at a much easier and faster pace [4]. A 6-axis force-torque sensor is mounted on the welding torch through a custom-built walk-through teaching (WTT) handle. The operator grasps the WTT handle and moves the welding torch naturally to position it in the required welding positions. The sensor senses the force and moments exerted by the operator's hand. The controller then commands the robot to move in response to the sensed forces. To achieve this, it is important to be able to control the dynamic behavior of the robotic manipulator, or to control the impedance of the manipulators. The biggest advantage of SWERS is the easier and faster operation compared to a conventional robotic system. With the implementation of the new Walk-Through Teach method, robot can now be used for the panel line for a faster welding time as shown in the welding tests. For a specific welding length of 1m, the total operation time including robot teaching time of SWERS is 5% faster than manual welding. It will be even faster if the workpiece have repeated patterns because the Teach-Weld-Weld mode can be used. 45 As the SWERS is so easy to operate, the training time required to use the system is much shorter than for a conventional robotic system. Apart from the improvement of welding cycle time, the welding quality of the robotics system is better than manual arc welding due to the implementation of optimized welding parameters on the system and non-stop welding lines. Besides, the precise motion of the robot in addition to the arc sensing for search and tracking the seams has also contributed for better welds. Since the actual welding is handled by the robot, the operator can now stay away from the fumes and heat generated by the welding. A less hazardous and better working condition for human is another advantage of using this robotic system. By incorporating a force-torque sensor together with powerful algorithms, a new way of robot teaching method is implemented and proved useful for the automated welding in shipyards. The custom design man- machine interface is crucial for the operation of the robotic system for the complicated welding operation [4]. References: 1. Mode of access: https://en.wikipedia.org/wiki/Manipulator_(device). – Date of access: 15.02.2018. 2. Mode of access: https://www.booksee.org/book/515530. – Date of access: 25.02.2018. 3. Mode of access: https://www.thefabricator.com/article/arc- welding/thinking-about-submerged-arc-welding. – Date of access: 01.03.2018. 4. Mode of access: https://www.researchgate.net/publication/ 233708454_Walk-through-programmed-robot-for-welding-in- shipyards. – Date of access: 01.03.2018. 46 УДК 662.74: 811.111 Achinovich V., Barankevich N., Beznis Y. 
Coke Production for Blast Furnace Ironmaking Belarusian National Technical University Minsk, Belarus A world class blast furnace operation demands the highest quality of raw materials, operation, and operators. Coke is the most important raw material fed into the blast furnace in terms of its effect on blast furnace operation and hot metal quality [1]. Coke is basically a strong, non–melting material which forms lumps based on a structure of carbonaceous material internally glued together. The average size of the coke particles is much larger than that of the ore burden materials and the coke will remain in a solid state throughout the blast furnace process [2]. A high quality coke should be able to support a smooth descent of the blast furnace burden with as little degradation as possible while providing the lowest amount of impurities, highest thermal energy, highest metal reduction, and optimum permeability for the flow of gaseous and molten products. Introduction of high quality coke to a blast furnace will result in lower coke rate, higher productivity and lower hot metal cost. The cokemaking process involves carbonization of coal to high temperatures (1100°C) in an oxygen deficient atmosphere in order to concentrate the carbon. The commercial cokemaking process can be broken down into two categories: by-product cokemaking and non-recovery/heat recovery cokemaking. The majority of coke produced in the world comes from wet-charge, by-product coke oven batteries [1]. The entire cokemaking operation is comprised of the following steps: before carbonization, the selected coals from specific 47 mines are blended, pulverized, and oiled for proper bulk density control. The blended coal is charged into a number of slot type ovens wherein each oven shares a common heating flue with the adjacent oven. Coal is carbonized in a reducing atmosphere and the off-gas is collected and sent to the by- product plant where various by-products are recovered. The coal-to-coke transformation takes place as follows: The heat is transferred from the heated brick walls into the coal charge. From about 375°C to 475°C, the coal decomposes to form plastic layers near each wall. At about 475°C to 600°C, there is a marked evolution of tar, and aromatic hydrocarbon compounds, followed by resolidification of the plastic mass into semi-coke. At 600°C to 1100°C, the coke stabilization phase begins. This is characterized by contraction of coke mass, structural development of coke and final hydrogen evolution. During the plastic stage, the plastic layers move from each wall towards the center of the oven trapping the liberated gas and creating in gas pressure build up which is transferred to the heating wall. Once, the plastic layers have met at the center of the oven, the entire mass has been carbonized [1]. The incandescent coke mass is pushed from the oven and is wet or dry quenched prior to its shipment to the blast furnace. In non-recovery coke plants, originally referred to as beehive ovens, the coal is carbonized in large oven chambers [3]. The carbonization process takes place from the top by radiant heat transfer and from the bottom by conduction of heat through the sole floor. Primary air for combustion is introduced into the oven chamber through several ports located above the charge level in both pusher and coke side doors of the oven. Partially combusted gases exit the top chamber through down comer passages in the oven wall and enter the sole flue, thereby heating the sole of the oven. 
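The temperature stages of the coal-to-coke transformation described above can be restated compactly; the minimal C sketch below simply maps an oven temperature onto the corresponding stage and is intended only as a summary of the figures quoted in this text.

/* A minimal sketch restating the temperature stages of the coal-to-coke
 * transformation described above as a lookup function.                   */
#include <stdio.h>

static const char *carbonization_stage(double temp_c)
{
    if (temp_c < 375.0)   return "heating of the charge";
    if (temp_c < 475.0)   return "plastic layers form near the oven walls";
    if (temp_c < 600.0)   return "tar evolution, resolidification into semi-coke";
    if (temp_c <= 1100.0) return "coke stabilization: contraction, hydrogen evolution";
    return "above normal coking temperature";
}

int main(void)
{
    double samples[] = { 300.0, 420.0, 550.0, 900.0 };
    for (int i = 0; i < 4; i++)
        printf("%6.0f C -> %s\n", samples[i], carbonization_stage(samples[i]));
    return 0;
}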
Combusted gases collect in a common tunnel and exit via a stack which creates a natural 48 draft in the oven. Since the by-products are not recovered, the process is called non-recovery cokemaking. In one case, the waste gas exits into a waste heat recovery boiler which converts the excess heat into steam for power generation; hence, the process is called heat recovery cokemaking. High quality coke is characterized by a definite set of physical and chemical properties that can vary within narrow limits. The coke properties can be grouped into physical properties and chemical properties. Measurement of physical properties aids in determining coke behavior both inside and outside the blast furnace. In terms of coke strength, the coke stability and coke strength after reaction with CO2 (CSR) are the most important parameters. The stability measures the ability of coke to withstand breakage at room temperature and reflects coke behavior outside the blast furnace and in the upper part of the blast furnace. CSR measures the potential of the coke to break into smaller size under a high temperature CO/CO2 environment that exists throughout the lower two- thirds of the blast furnace. A large mean size with narrow size variations helps maintain a stable void fraction in the blast furnace permitting the upward flow of gases and downward of molten iron and slag thus improving blast furnace productivity. The most important chemical properties of coke are moisture, fixed carbon, ash, sulfur, phosphorus, and alkalies. Fixed carbon is the fuel portion of the coke; the higher the fixed carbon, the higher the thermal value of coke. The other components such as moisture, ash, sulfur, phosphorus, and alkalies are undesirable as they have adverse effects on energy requirements, blast furnace operation, hot metal quality, and/or refractory lining [1]. For blast furnace ironmaking the most important functions of coke are: to provide the structure through which gas can ascend and be distributed through the burden; to generate heat to melt the burden; to generate reducing gases; 49 to provide the carbon for carburization of the hot metal and to act as a filter for soot and dust [2]. A good quality coke is generally made from carbonization of good quality coking coals. Coking coals are defined as those coals that on carbonization pass through softening, swelling, and resolidification to coke. One important consideration in selecting a coal blend is that it should not exert a high coke oven wall pressure and should contract sufficiently to allow the coke to be pushed from the oven. The properties of coke and coke oven pushing performance are influenced by following coal quality and battery operating variables: rank of coal, petrographic, chemical and rheologic characteristics of coal, particle size, moisture content, bulk density, weathering of coal, coking temperature and coking rate, soaking time, quenching practice, and coke handling. Coke quality variability is low if all these factors are controlled. Coke producers use widely differing coals and employ many procedures to enhance the quality of the coke and to enhance the coke oven productivity and battery life [1]. References: 1. Mode of access: http://www/steel/org/steel-technology/how- its-made/processes/processes-info/coke-production-for-blast - furnace-ironmaking.aspx?siteLocation=88e232e1-d52b- 4048- 9b8a-f687fbd5cdcb. – Date of access: 20.02.2018. 2. 
Mode of access: http://allaboutmetallurgy.com/wp/wp- content/uploads/2016/12/Modern-Blast-Furnace-Ironmaking- an-Introductio-001-2.pdf. – Date of access: 10.03.2018. 3. Mode of access: https://www.ifc.org/wps/wcm/connect/9eca b70048855c048ab4da6a6515bb18/coke_PPAH.pdf?MOD=AJ PERES. – Date of access: 12.03.2018. 50 УДК 004.41:811.111 Borodach V., Vasilenya M., Beznis Y. Software. Notion and Development Belarusian National Technical University Minsk, Belarus A software-based system can be neatly compared with a biological entity called a superorganism. Comprising software, hardware, peopleware and their interconnectivity (such as the Internet), and requiring all to survive, the silicon superorganism is itself a part of a larger superorganism [1]. Whether that business is government, academic, or commercial, the software-based system, like its biological counterpart, must grow and adapt to meet rapidly changing requirements. Compared to a biological superorganism, which may take many generations to effect even a minor hereditary modification, software can be modified immediately. This makes it far superior in this respect to the biological entity in terms of its evolutionary adaptability. Software, the brain of the silicon superorganism, controls the action of the entire entity. Software is the embodiment of logical processes, whether in support of business functions or in control of physical devices. The nature of software as an instantiation of process can apply very broadly, when modeling complex organizations, or very narrowly as when implementing a discrete numerical algorithm. Software has a potentially wide range of application, and that well designed has a potentially long period of utilization [2]. While some would define software as solely the code that a programming language generates from the compilation process, a broader and more precise definition includes requirements, specifications, designs, program listings, 51 documentation, procedures, rules, measurements, and data as well as the tools used to create, test, optimize, and implement the software [1]. Software at the lowest programming level is termed a source code. This differs from an executable code (i.e., which can be executed by the hardware to perform one or more specified functions) in that software is written in one or more programming languages and cannot, by itself, be executed by the hardware. A programming language is a set of words, letters, numerals, and abbreviated mnemonics, regulated by a specific syntax, used to describe a program to a computer. There are a wide variety of programming languages, many of them tailored for a specific type of application. C, one of today’s more popular programming languages, is used in engineering as well as business environments while object- oriented languages such as C ++ and Smalltalk have been gaining acceptance in both of these environments. The programming language, whether it be C++, Java, Visual BASIC, C, FORTRAN, HAL/s, COBOL, or something else, provides the capability to code such logical constructs as that having to do with: user interface, model calculations, program control, message processing, database, data declaration, simulation, tools and some other. As a base unit, a line of code can be joined with other lines of code to form many things. In a traditional software environment many lines of code form a program, sometimes referred to as an application program or just plain application. But lines of source code by themselves cannot be executed. 
First, source code must be run through what is called a compiler to create an object code. Next, the object code is run through a linker which is used to construct an executable code. Compilers are programs themselves. Their function is twofold. The compiler first checks the source code for obvious syntax errors and then, if it finds none, creates object code for a 52 specific operating system. UNIX, Linux (a spinoff of UNIX), and NT are all examples of operating systems. An operating system can be thought of as a supervising program that controls the application programs that run under its control. Since operating systems (as well as computer architectures) can be different from each other, the object code resulting from the source code compiled for one operating system cannot be executed under a different kind of operating system – without a recompilation [1]. Solving a complex business or engineering problem often requires more than one program. One or more programs that run in tandem to solve a common problem are known collectively as a system. By combining objects it is possible to create more organized systems than those created by traditional means. Software development becomes a speedier and less error-prone process as well. Since objects can be reused, once tested and implemented, they can be placed in a library for other developers to reuse. The more objects in the library, the easier and quicker it is to develop new systems. The process of writing programs and/or objects is known as software development, or software engineering. It is composed of a series of steps or phases, collectively referred to as a development life cycle. The phases include the following: an analysis or requirements phase, where the business problem is dissected and understood; a specification phase, where decisions are made as to how the requirements will be fulfilled; a design phase; an implementation or programming phase, where one or more tools are used to write and/or generate code; a testing phase, where the code is tested against a business test case and errors in the program are found and corrected; an installation phase, where the systems are placed in production; and a maintenance phase, where modifications are made to the system. But different people develop systems in different ways. 53 These different paradigms make up the opposing viewpoints of software engineering [3]. A new approach to software engineering is known as development before the fact (DBTF) which includes a technology, a language, and a process (or methodology). With DBTF all aspects of system design and development are integrated with one systems language and its associated automation. Reuse naturally takes place throughout the life cycle. Objects, no matter how complex, can be reused and integrated. Environment configurations for different kinds of architectures can be reused. A newly developed system can be safely reused to increase even further the productivity of the systems developed with it. The paradigm shift occurs once a designer realizes that many of the old tools are no longer needed to design and develop a system. For example, with one formal semantic language to define and integrate all aspects of a system, diverse modeling languages (and methodologies for using them), each of which defines only part of a system, are no longer necessary. There is no longer a need to reconcile multiple techniques with semantics that interfere with each other. 
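The compile-and-link chain described above can be illustrated with the smallest possible example. The single C source file below cannot be executed as it stands; the comment shows the usual two commands that first compile it into object code and then link that object code into an executable (the command names are the generic ones and may differ between operating systems).

/* hello.c - a one-file illustration of the source -> object -> executable
 * chain described above.  Typical commands (any C compiler):
 *     cc -c hello.c        # compile: source code -> object code (hello.o)
 *     cc hello.o -o hello  # link:    object code -> executable  (hello)
 * The object file alone cannot be run; only the linked executable can.   */
#include <stdio.h>

int main(void)
{
    printf("compiled and linked successfully\n");
    return 0;
}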
DBTF can support a user in addressing many of the challenges presented in today’s software development environments [1]. References: 1. Mode of access: http://www.sze.hu/~szenasy/Szenzorok %20%E9s%20aktu%E1torok/Szenzakt%20jegyzetek/Mech atronics_handbook%5B1%5D.pdf. – Date of access: 15.02.2018. 2. Mode of access: https://en.wikipedia.org/wiki/Software_design. – Date of access: 25.02.2018. 3. Mode of access: https://sea.ucar.edu/best-practices/design. – Date of access: 22.02.2018. 54 УДК 004.352:811.111 Silich V., Boyarskaya A. Three-dimensional Machine-vision Measurement System Belarusian National Technical University Minsk, Belarus A machine-vision method was used to build a three- dimensional measurement system using a measurement algorithm and a perspective transformation. A three- dimensional measurement system for obtaining its feature points in the world coordinate system was used to calculate the measurement data. The experimental results were verified with a more precise measurement equipment, automatic transformer observation system. With the rapidly growing demand for industrial automation in the manufacturing sector, machine vision now plays an important role in many fields. Machine- vision technology is quickly becoming a widely applied micrometer, inside micrometer, vernier caliper, coordinate measuring machine (CMM), which all require direct physical contact. The advantages of a contact measurement are found in the high measurement method toward the quality inspection of a wide variety of products. Geometric and size measurements are among the essential quality control processes that are performed to ensure that manufactured parts conform to specified standards in mechanical engineering. This type of inspection is normally done through the use of specialized instruments, such as a steel rule, accuracy and the general suitability for basic quantitative geometries. However, most contact measurement methods are usually limited by the size of the analysis and the high cost involved with time-consuming skilled labor. These drawbacks may be overcome by implementing a non-contact measurement 55 method, such as the use of laser measurement devices, ultrasonic measurement methods, machine-vision systems, automatic transformer observation system (ATOS) scanning measurement equipment. The machine-vision method is based on the human visual system which can detect the dimensions of objects by means of light passing through an individual’s cornea, pupil, and lens and then projecting images onto the retina. Then, the visual signals received through the optic nerve can pass into the brain. The analysis and integration within the brain can ascertain depth perception of those objects. The stereoscopic vision system of the human body can thereby determine the relative and absolute distance of observed objects, and even the thickness of the objects, as well as other features. With the machine-vision method, the visual information is transmitted to a personal computer (PC) through the signal line of a mainframe computer, and then the spatial position of the object to be measured; it is calculated according to its location in the world coordinate system. A machine-vision system generally consists of five basic components: a light source, an image capturing device, an image capturing board (frame grabber), and an appropriate computer hardware and software system. 
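The transformation such a system relies on can be sketched with the standard pinhole-camera model: a point in the world (here already expressed in camera coordinates) is projected onto the image plane, and calibration is what allows the measurement system to run this mapping in reverse. The intrinsic parameters in the C sketch below are invented example values, not calibration results from the study.

/* Minimal pinhole-camera sketch of the perspective transformation that a
 * machine-vision measurement system calibrates and then inverts.  The
 * intrinsics below (focal lengths and principal point in pixels) are
 * made-up example values.                                                 */
#include <stdio.h>

int main(void)
{
    /* Assumed camera intrinsics */
    double fx = 1200.0, fy = 1200.0;   /* focal length in pixel units    */
    double cx = 640.0,  cy = 480.0;    /* principal point (image centre) */

    /* A 3-D point expressed in the camera coordinate system, metres */
    double X = 0.10, Y = -0.05, Z = 1.50;

    /* Perspective projection onto the image plane */
    double u = fx * X / Z + cx;
    double v = fy * Y / Z + cy;

    printf("world point (%.2f, %.2f, %.2f) m -> pixel (%.1f, %.1f)\n",
           X, Y, Z, u, v);
    return 0;
}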
In recent years, many authors have studied using machine vision in many fields, such as agriculture, manufacturing, and medical-related sciences. In addition, machine vision has been used to control the quality of products, for example, in estimating classifications of surface roughness, and in measuring hot-formed products. In agriculture, it has been used to detect defective eggs and fruit, as well as plant diseases. Within machine-vision technology, the performance largely depends on calibration accuracy. Machine vision is used to establish a non-contact 3D measurement system using a measurement algorithm and a perspective transformation 56 method. Double CMOS cameras are used to capture the images of the objects. A real pattern is used to calibrate the coordinates. After capturing the images of the objects and calibrating the camera, a linear transformation between the image coordinate system and the world coordinate system is performed, thereby determining the real-world dimensions of the objects. In summary, the experimental results have shown that the 3D measurement system is suitable for measuring the dimensions of various objects having complex geometries and oriented at oblique angles. 57 УДК 656.13.072/073:811.111 Pavlov V., Lameko P., Boyarskaya A. World-wide Application of the TIR System Belarusian National Technical University Minsk, Belarus TIR is a tried and tested tool that facilitates trade to drive global growth and inclusive development. It is an excellent solution for the digital economy and is used every day by thousands of transport and logistics companies, drivers, and customs officials. The TIR system is promoted under the auspices of the United Nations to make it as widely available as possible for all countries wishing to make use of it. In 1984, the Economic and Social Council of the United Nations (ECOSOC) adopted a Resolution which recommends that countries world-wide examine the possibility of acceding to the Convention and introducing the TIR system. Furthermore, it recommends that international, intergovernmental and non- governmental organizations, and in particular the Regional Commissions of the United Nations, promote the introduction of the TIR system as a universal Customs transit system. It is the key to faster border crossings for truck drivers, which means lower costs for transport and logistics companies and customs authorities. TIR also directly contributes to implementing key goals of the World Trade Organization’s Trade Facilitation Agreement (TFA) such as measures to enhance transparency, clearance of goods, freedom of transit and customs cooperation, and the publication and availability of information. Main principles of TIR system are: secure vehicles or containers; international guarantee; TIR carnet; mutual 58 recognition of Customs controls; controlled access; delivery safety. In light of the expected increase in world trade, further enlargement of its geographical scope and the introduction of an electronic TIR system (so-called eTIR-system), it is expected that the TIR system will continue to remain the only truly global customs transit system. The eTIR project aims towards the full computerization of the TIR system. Data exchange platform is available for all actors involved in the TIR system. Secure exchange of data between national Customs systems is related to the international transit of goods under TIR Convention. The eTIR project allows Customs to manage the data on guarantees, issued by guarantee chains to holders. 
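Purely as an illustration of the kind of guarantee data the text says the eTIR platform exchanges, the C sketch below defines a toy record for one guarantee. The field names and values are invented for this example and are not taken from the TIR Convention or any eTIR message specification.

/* Purely illustrative sketch of guarantee data exchanged between guarantee
 * chains, holders and Customs under eTIR, as described above.  The field
 * names are invented for this example only.                               */
#include <stdio.h>

typedef struct {
    char guarantee_reference[32];   /* identifier issued by the guarantee chain */
    char holder_id[32];             /* authorized TIR carnet holder             */
    char customs_departure[64];     /* customs office where transit starts      */
    char customs_destination[64];   /* customs office where transit ends        */
    int  valid;                     /* 1 while the guarantee covers the journey */
} GuaranteeRecord;

int main(void)
{
    GuaranteeRecord rec = {
        "GRN-0000001", "HOLDER-BY-123",
        "Customs office of departure", "Customs office of destination", 1
    };
    printf("%s for %s: %s -> %s (%s)\n",
           rec.guarantee_reference, rec.holder_id,
           rec.customs_departure, rec.customs_destination,
           rec.valid ? "guarantee active" : "guarantee discharged");
    return 0;
}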
The TIR Convention also contains specific technical requirements for the construction of the load compartments of vehicles or containers, in order to avoid smuggling. In addition, only carriers authorized by customs are allowed to transport goods under the TIR procedure. To cover the customs duties and taxes at risk throughout the journey, the Convention has established an international guaranteeing chain which is managed by the International Road Transport Union (IRU). IRU is also responsible for the printing and distribution of the so-called TIR Carnet, which serves both as international Customs document and proof of guarantee [1]. Many countries in Africa, Asia, the Middle East and South America are looking to join TIR to experience the benefits it has brought to Europe and Central Asia over the last seven decades. TIR now connects more continents and countries than ever. India’s Cabinet has approved the country’s accession to the UN TIR Convention, the global standard for international freight customs transit. This milestone decision will facilitate goods transport and transit, putting India and her neighbours at 59 the centre of efforts to boost overland trade and regional integration across South Asia and beyond. In the light of the recent Motor Vehicles Agreement to improve cross-border transport between Bangladesh, Bhutan, India and Nepal, the government’s decision on TIR will fast- track the region’s potential to become a productive trade hub. TIR will also be critical in helping India implement the World Trade Organization’s Trade Facilitation Agreement, which entered into force last month. The streamlined international system for the movement of goods by road and other modes will, in particular, enhance India’s International North-South Transport Corridor, a key trade route between Central Asia and the Commonwealth of Independent States in the north, and southern ports in India and beyond, such as Chabahar in Iran. Qatar has become the 73rd country to ratify the United Nations’ TIR Convention, the global standard for customs transit, to facilitate trade and the seamless and secure movement of goods across its borders. Qatar’s ratification is an important milestone for improving road and multimodal transport in the region, and a sign of the country’s integration into global transport and trade norms. The General Authority of Customs has officially nominated IRU’s member, Qatar Chamber of Commerce and Industry, as the TIR national guaranteeing and issuing association in the State of Qatar. Due to the large blue-and-white TIR plates carried by vehicles using the TIR convention, the word TIR entered many languages as a neologism, becoming the default generic name of a large truck. References: 1. Mode of access: http://www.unece.org/tir/about.html. – Date of access: 13.03.2018. 60 УДК 629.33.03-83:811.111 Nikitina M., Yurko E., Boyarskaya A. Green Transportation Belarusian National Technical University Minsk, Belarus Transportation is one aspect we cannot do without. However, the current transportation systems come along with a wide range of problems including global warming, environmental degradation, health implications (physical, emotional, mental, spiritual), and emission of greenhouse gases. In fact, the transport sector attributes to 23% of the globe’s greenhouse gas emission resulting from burning of fossil fuels. Out of the total greenhouse gas emissions, road transport takes up a lion share, 75% and this trend is projected to increase in the future. 
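Combining the two shares quoted above gives a rough sense of scale: if transport accounts for about 23% of greenhouse-gas emissions and road transport for about 75% of that, road transport alone is responsible for roughly 17% of the global total, as the trivial C calculation below shows.

/* Road transport's share of total greenhouse-gas emissions, using only the
 * two percentages quoted in the text above.                               */
#include <stdio.h>

int main(void)
{
    double transport_share = 0.23;   /* transport sector share of GHG emissions */
    double road_share      = 0.75;   /* road transport share within the sector  */
    printf("Road transport share of total emissions: %.1f%%\n",
           transport_share * road_share * 100.0);
    return 0;
}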
Transportation is the major contributor to greenhouse gas emission. The immediate and obvious solution to this environmental pollution is greening of the transport sector, which suggests any sort of transportation vehicle or transportation habit that is environmentally friendly and doesn’t emit toxic gasses that could impact the environment and human health. This leads to Green Transportation, which means any kind of transportation practice or vehicle that is eco- friendly and does not have any negative impact on the immediate environment. Green transportation revolves around efficient and effective use of resources, modification of the transport structure and making healthier travel choices, innovation and production of vehicles that utilize renewable sources of energy such as wind, solar, biofuels and hydroelectricity. 61 Modes of Green Transportation The existing modes of transportation require enormous amounts of energy, for example, fossil fuels to power vehicles on the roads. Promising innovative technologies could be the ultimate solution, but before such innovations come to fruition, the world can play a significant role by utilizing eco-friendly modes of transportation. Being a responsible citizen one should opt for green transportation that is easily accessible to everyone. Some of the modes of green transportation are available nowadays. Electric bikes. Electric bikes are great modes of green transportation, because they don’t release any harmful emission into the environment. The speed of electric bikes is greatly regulated by law, you must have a special registration, license, and insurance to be able to ride an electric bike. Electric vehicles. Some kinds of electric vehicles include cars, motorcycles, lorries, trains, boats, and scooters. Electric vehicles powered entirely by electricity do not emit any dangerous gasses, even though the toxic emissions might be produced by plants generating the electricity. Still, the power can be tapped from renewable technologies like geothermal, hydroelectric, solar power and wind turbines. Green trains. The innovative hybrid locomotives utilize similar technologies applied in hybrid cars. The modern electric trains make use of electrified third rail, overhead lines or devices that store up energy like fuel cells and batteries. The advantage of these electric trains is that they travel at tops speeds of more than 200 mph, yet maintaining high levels of safety. Electric motorcycles. Like other electric vehicles, electric motorcycles do not give off emissions. They are typically battery powered. Some top range motorcycles even have their parts designed from recycled materials. Experts are projecting that they may be mass-produced in the near future. 62 Multiple occupant vehicles. The explosion of vehicles around the world has been due to the booming world economy. Multiple occupant vehicles, also referred to as carpools, reduce the number of vehicles on roads, hence, minimizing levels of pollution. Multiple occupant vehicles are very eco-friendly and favorable mode of green transportation. Instead of 5 individuals driving their own cars in similar direction, it’s a lot more economical (saves money and fuel) and ecologically sensible to make use of a single car to take all of you to the destination. Service and freight vehicles. These kinds of vehicles attribute to about 9% of the total toxic gas emissions. 
Utilizing electricity and biofuels instead of the regular fossil fuel sources in services and freight vehicles, administering travel demands and offering many travel alternatives will go a long way towards aligning the transportation sector to conform to green transportation. Hybrid cars. Hybrid cars also rely on electricity. A vast majority of hybrid cars are designed to automatically recharge their batteries by converting energy in the course of braking. Greenhouse emissions in hybrid cars are extremely low; emissions can range from 26% – 90% lower compared to standard cars. According to experts, hybrid cars cut down health-threatening emissions by over 90%. While hybrid cars contribute little to no greenhouse emissions, they lack in some areas. The batteries have some environmental impacts. Green transportation has wide-ranging benefits – environmental, health, economic and individual budgets. Some of the key benefits of using green transportation are: Fewer to no environmental pollution The existing modes of transportation utilize sources of energy such as fossil fuels, which emit vast quantities of greenhouse gases to the environment. Shifting to green transportation would help rid the atmosphere of these toxic 63 gases since these modes of transportation have few to zero emissions. Contribute to building of a sustainable economy Manufacturing and distribution of green vehicles will go along with improving existing transport systems. This will lead to creation of more jobs in the transport sector, hence, minimizing social-economic disparities and building up a sustainable economy. It will also minimize over-reliance on fossil fuels, which drain an economy. Improved health Energy sources from fossils fuels like natural gas, coal, and oil emit toxic gases that negatively affect our health. In fact, these gasses have been associated with rising cases of cancer and other cardiovascular diseases. The emissions produced by green vehicles are not harmful to human health, so embracing green transportation will only improve a country’s health status. Saves your money Embracing green transportation modes like bicycles, multiple occupant cars, and electric motorcycles will save you a lot of out-of-pockets costs related to buying fossils fuels at the pump. There are many other benefits associated with green transportation, which will enhance healthier lifestyle and improve quality of human life. It’s a difficult task to convince the entire population to change up to green transportation, but with significant steps underway, the future of green transportation is bright [1]. References: 1. Mode of access: ttps://www.conserve-energy- future.com/modes-and-benefits-of-green-transportation.php. – Date of access: 12.03.2018. 64 УДК 621.869.888+629.35 Nemchenko A., Boyarskaya A. Current Trends in Container Shipping Industry Belarusian National Technical University Minsk, Belarus What is it about the container that is so important? Surely not the thing itself. The value of this utilitarian object lies not in what it is, but in how it is used. The container is at the core of a highly automated system for moving goods from anywhere to anywhere, with a minimum of cost and complication. How much the container matters to the world economy is impossible to quantify, but clearly the container reduced the cost of moving freight. 
In 1966, in the decade after the container first came into international use, the volume of international trade in manufactured goods grew more than twice as fast as the volume of global manufacturing production, and two-and-a-half times as fast as global economic output. When containers were gaining share from breakbulk (noncontainerized) cargo, container trade could grow much faster than overall trade. However, the containerization ratio – a measure of seaborne cargo transported in containers – has stabilized at 13 percent since the financial crisis. Some sectors (such as electronics, medicines, and apparel) are entirely containerized; others seem stuck somewhere in the midrange; for instance, the containerization ratios for automobiles and for nonrefrigerated agricultural goods – 25 percent and 12 percent, respectively – have remained more or less static for the past decade. In the absence of tail-winds, achieving container-trade growth that’s higher than the growth of GDP and overall trade is harder than ever. 65 A number of interlocking trends are driving the slowdown in the multiplier – the multiple of container-trade growth over GDP growth [1]: Growth in emerging markets China became the world’s factory, producing ever-larger shares of global manufacturing output and absorbing enormous amounts of natural resources and intermediate goods. The container-shipping industry supported much of this trade: in 2015, China imported and exported 52 million 20-foot equivalent units, a fourfold increase on the 13 million twenty- foot equivalent units (TEUs) of 2000.China is now moving away from a development model based on investment and the export of goods and toward a consumption- and services-based model. Its annual real GDP growth has fallen from more than 10 percent to 6–7 percent, and its trade in goods with the rest of the world has slackened, as well. Changing manufacturing footprints Today’s manufacturing sector is in a state of flux as the growing use of digitally enabled technologies (such as advanced robotics and 3-D printing) starts to change the regions where production takes place. According to some analysts, a wave of reshoring is imminent as new manufacturing technologies displace labor. However, labor costs are not the sole determinant of manufacturing locations. In fact, sectors in which labor costs are the main driver of location decisions produced only 13 percent of TEUs in 2015. Over half – 55 percent – came from sectors (such as chemicals, food processing, pulp and paper, plastics, and rubber) that treat access to affordable raw materials as a more pressing consideration. One technology in particular – 3-D printing – could have a novel impact on trade volumes, but not by precipitating a mass localization of production. With this technology, objects are made by adding layers, thus minimizing waste, instead of 66 by milling down materials. As 3-D printing gets cheaper, faster, and more compatible with metals, ceramics, and other materials, its increasing use may affect trade in raw materials for manufacturing. At the moment, though, the impact is expected to be marginal: one analysis estimates that TEU volumes will fall less than 1 percent by 2035. Dematerialization of demand As societies get wealthier, they gradually saturate their demand for goods, and demand for services tends to take over. The global rise in incomes thus has two countervailing effects: on the one hand, expanding the consuming class and, on the other, dematerializing its consumption. 
Of these two effects, we have reason to believe that dematerialization is gradually winning out. First, China is already evolving toward services- led consumption. Second, incomes are growing in Africa, India, and Latin America more slowly than they did in China over the past three decades, muting the goods-intensive phase of development in these other regions. Third, technology is both miniaturizing products (a smartphone replaces, among other things, a camera, a map, a flashlight, a calculator, a newspaper, and a telephone) and promoting services (say, taking an Uber) at the expense of goods (buying a car). Uncertainties in geopolitics and policy The geopolitical and policy environment is now somewhat precarious: a quarter-century of globalization, carried along by a steady stream of trade deals, has stalled. Many such deals remain on the agendas of political leaders, but the future is uncertain. Taken together, these trends will probably slow down the growth of container trade. So what can we expect in the next five decades? An optimist might envision a world where India reaches an escape velocity growth rate by improving infrastructure, reforming markets, and liberalizing trade 67 barriers – integrating more than one billion people into the global economy and its supply chains. In that scenario, manufacturers would enjoy a new round of labor-cost savings and start a second wave of offshoring, this time from East Asia to India. Robotics and 3-D printing wouldn’t localize most production but rather supplement existing supply chains and create new ones, as Align Technology, for example, does by 3-D printing dental products in Mexico and shipping them to the United States, Europe, and other markets. For the pessimist, on the other hand, China’s achievements over the past three decades probably won’t be repeated elsewhere. Many supply chains would retrench – nearshoring – as new technologies made labor costs less relevant. Geopolitics might also intervene: tensions between great powers could create incentives to keep suppliers close. Some argue that these trends, in combination, could force global trade into a structural decline. Economic growth goes hand in hand with specialization, which in turn promotes further trade. So long as underlying economic growth is positive, trade too is likely to grow – even if the multiplier is less than one. The real impact may be to shorten the distance between trading partners, thereby limiting the growth of long- distance international trade. The optimistic and pessimistic views concur that container trade will continue to grow; peak container isn’t on the horizon. Indeed, the flexibility of the container trade makes it resilient: one product may go out of fashion but another will come along to fill the box. References: 1. Saxon, S. Container shipping: The next 50 years / S. Saxon, M. Stone. –Travel, Transport & Logistics. – 2017. 68 УДК 811.111: 621.311 Sidorova D., Bozhko Y., Vanik I. The Prospects of Smart Grid in Belarus Belarusian National Technical University Minsk, Belarus The Belarusian economy, national security and even the health and safety of our citizens depend on the reliable delivery of electricity. The electric grid is more than just generation and transmission infrastructure. It is an ecosystem of asset owners, manufacturers, service providers, and government officials at state and local levels, all working together to run electrical grids. 
Our electric infrastructure is aging and it is being pushed to do more than it was originally designed to do [1]. Modernizing the grid to make it smarter and more resilient through the use of cutting-edge technologies, equipment, and controls that communicate and work together to deliver electricity more reliably and efficiently can greatly reduce the frequency and duration of power outages, reduce storm impacts, and restore service faster when outages occur. Consumers can better manage their own energy consumption and costs because they have easier access to their own data. Utilities also benefit from a modernized grid, including improved security, reduced peak loads, increased integration of renewables, and lower operational costs. Smart Grid technologies are made possible by two-way communication technologies, control systems, and computer processing. These advanced technologies include advanced sensors that allow operators to assess grid stability, advanced digital meters that give consumers better information and automatically report outages, relays that sense and recover from faults in the substation automatically, automated feeder 69 switches that re-route power around problems, and batteries that store excess energy and make it available later to the grid to meet customer demand. A smart grid is an electricity network based on digital technology that is used to supply electricity to consumers via two-way digital communication. This system allows for monitoring, analysis, control and communication within the supply chain to help improve efficiency, reduce energy consumption and cost, and maximize the transparency and reliability of the energy supply chain. The smart grid needs to be introduced in Belarus with the aim of overcoming the weaknesses of conventional electrical grids by using smart net meters [2]. Smart grid is equally advantageous for enterprises, retail stores, hospitals, universities and multinational corporations. The entire smart grid system is automated for tracking the electricity consumption at all the locations. Grid architecture is also combined with energy management software for estimating the energy consumption and its associated cost for a specific enterprise. Generally, electricity prices increase along with demand. By providing consumers with information about current consumption and energy prices, smart grid energy management services help to minimize the consumption during high-cost, peak-demand times. A modern smart grid system has the following capabilities. It can repair itself. It encourages consumer participation in grid operations. It ensures a consistent and premium-quality power supply that resists power leakages. It allows the electricity markets to grow and make business. It can be operated more efficiently [2]. The basic concept of smart grid is to add monitoring, analysis, control, and communication capabilities to the national electrical delivery system to maximize the throughput of the system while reducing the energy consumption. Smart 70 grid initiatives seek to improve operations, maintenance and planning by making sure that each component of the electric grid can both talk and listen. Another major component of smart grid technology is automation. In many places, a power company will only know that service is out if a customer calls. In a smart grid scenario, if service is interrupted the company will know right away because certain components of the grid (smart meters in the affected area, for instance) stop sending sensor data. 
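The outage-detection idea described above can be sketched very simply: if the meters in an area stop reporting, the utility flags a probable outage without waiting for customer calls. In the C sketch below the meter data, reporting interval and tolerance are invented example values.

/* Simplified sketch of the outage-detection idea described above: meters
 * that have been silent for longer than the tolerated interval indicate a
 * probable outage in their area.  All values are invented examples.       */
#include <stdio.h>

#define METERS            4
#define REPORT_INTERVAL   900   /* expected seconds between meter reports   */
#define MISSED_TOLERANCE  2     /* reports a meter may miss before an alarm */

int main(void)
{
    /* Seconds since the last report was received from each meter */
    long silent_for[METERS] = { 120, 200, 3600, 4100 };

    for (int i = 0; i < METERS; i++) {
        if (silent_for[i] > (long)REPORT_INTERVAL * MISSED_TOLERANCE)
            printf("meter %d: silent for %ld s -> probable outage in its area\n",
                   i, silent_for[i]);
        else
            printf("meter %d: reporting normally\n", i);
    }
    return 0;
}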
By ensuring that all the components of the grid – from transformers to power lines to home electric meters – have IP addresses and are capable of two-way communication, the company can manage distribution more efficiently, be proactive about maintenance and respond to outages faster [1]. One of the radically new concepts of smart grid to be introduced in Belarus is micro grids, which are generally defined as low voltage grids with distributed generation sources, power storage devices and controlled loads (heaters and air conditioners). An important property of micro grids is that, despite functioning within the distribution system, they can automatically be transferred to an isolated state in the event of network failures and restore synchronization with the network after eliminating the accident while maintaining the required quality of electrical energy. Smart-micro grids can effectively cover the growing consumer demand due to the growth of electricity revenues from renewable energy sources. In a micro grid, energy resources can’t be completely planned, intellectual systems are combined with the communication infrastructure to provide control on the demand side, and through it – the balance between supply and demand. Governments and power companies across the world have recognized that the traditional grid, which has not significantly changed in 100 years, must be replaced by more 71 efficient, flexible and intelligent energy-distribution networks, called smart grids. These are digitally monitored, self-healing energy systems that deliver electricity or gas from generation sources, including distributed renewable sources, to points of consumption. They optimize power delivery and facilitate two- way communication across the grid, enabling end-user energy management, minimizing power disruptions and transporting only the required amount of power. The result is a lower cost to the utility and the customer, more reliable power, and reduced carbon emissions [3]. References: 1. Smart Grid: What is it and why is it important? [Electronic resource]. – Mode of access: http://www. nema.org/Policy/Energy/Smartgrid/pages/default.aspx. – Date of access: 05.03.2018. 2. Smart Grid [Electronic resource]. – Mode of access: https://www.techopedia.com/definition/692/smart-grid. – Date of access: 02.03.2018. 3. What is the Smart Grid [Electronic resource]. – Mode of access: https://www.smartgrid.gov/the_smart_grid/smart- grid.html. – Date of access: 05.03.2018. 72 УДК 811.111: 6243.4 Oshukovskaya O., Vanik I. Gun Control Should Be Stricter Belarusian National Technical University Minsk, Belarus The United States of America is one of the most well- armed countries in the world, and weapons, in some way, have become an integral part of the American DNA. In fact, the country consolidated the right to bear arms in its constitution. The inhabitants of America own about 300 million pieces of firearms. Almost every month world news teems with headlines about murders and often mass in the US. All over the country, rallies and protests are regularly held with demands for restrictions of weapons or a complete ban. Many questions arise. For example, if there is such a situation in the country, why have not the relevant measures been taken? It is necessary to look back at the American history. The first mention of weapons begins with hunting, which is still considered an occupation for real men. Then it plunges us into a time when America was a British colony, oppressed by the British taxation system. 
Dissatisfied colonists rose up, and the London government responded by banning the importation of all kinds of ammunition into the colony. After these events, the government reduced the country's army and relied more and more on the militia, which effectively forced people, feeling insecure, to carry weapons with them constantly [1]. Since then, Americans have argued that the right to carry weapons is one of their fundamental rights. When the United States soon proclaimed independence from Britain, this right was enshrined in the constitution as the Second Amendment. Then the problems began to grow. The threat of the mafia and the killings of such famous people as John Kennedy and Martin Luther King forced the government at least to require arms dealers to obtain a license. At that point the National Rifle Association began to play a role in the country's politics. Its ideology was very simple: to protest against any attempt to introduce weapons legislation. With the help of lobbying and support for parties and candidates, the Rifle Association managed to block almost all attempts to limit the ability of Americans to buy weapons. At the same time, the association successfully supported lawsuits, as a result of which the Supreme Court has in recent years expanded the protection of arms owners against state interference. Furthermore, the federal government has lost the right to forbid citizens from carrying loaded weapons, despite the fact that this causes thousands of accidents every year, including shootings involving children [1]. Consider just one recent incident. On February 14, 2018, at a school in Parkland, Florida, 17 people were killed, including adults and children. The shooter was a former student of the school, 19-year-old Nicholas Cruz. The motives for the attack are unclear [2]. Despite all the disputes, it is difficult to introduce new restrictions on weapons. It may seem surprising that, even with so many mass shootings in recent years, the number of murders in the US has fallen dramatically since the mid-1990s. In addition, the effectiveness of the restrictions that liberals are calling for is far from obvious. Statisticians have also examined particular crimes to ask whether this or that proposed restriction would have saved the victims; measures that seemed right turned out to be largely ineffective. In the meantime, demands to repeal the Second Amendment are growing louder. People hold rallies and campaigns in support of the victims of gun crime, demanding that weapons be restricted or completely banned. But this would require a tectonic shift in public opinion, which is not yet on the horizon. Supporters of the right to bear arms are also holding rallies today. They suspect that any concession will only lead to new demands, since the ultimate aim of the restrictionists is a complete ban and confiscation. And this directly affects the essence of the American mentality. Confiscation of weapons is not possible as long as the Second Amendment is in force in the United States, and repealing it would be very problematic [3]. Polls by many newspapers and magazines show that 50% of Americans are worried that the authorities will go too far in limiting their right to own weapons, while 45%, on the contrary, believe that the authorities should control weapons more strictly [3]. 
Thus, it is impossible to reach a final decision on the issue of limiting or banning firearms in the US. References: 1. The many reasons Americans own firearms [Electronic resource]. – Mode of access: https://www.wsj.com/articles/the- many-reasons-americans-own-firearms-1508262134. – Date of access: 01.03.2018. 2. 17 confirmed dead in “horrific” attack on Florida high school – as it happened [Electronic resource]. – Mode of access: https://www.theguardian.com/usnews/live/2018/feb/14/ florida-school-shooting-live-updates-latest-news-majority- stoneman-douglas. – Date of access: 21.02.2018. 3. I used to think gun control was the answer. My research told me otherwise [Electronic resource]. – Mode of access: https//www.washingtonpost.com/opinions/i-used-to-think-gun- control-was-the-answer-my-research-told-me- otherwise/2017/10/03/d33edca6-a851-11e7-92d1- 58c702d2d975_story.html?utm_term=.cd0f38da1784. – Date of access: 10.02.2018. 75 УДК 623.4.084.5 Dovzhenko P., Vasilieva T. Autonomous Cars: Future or Reality? Belarusian National Technical University Minsk, Belarus If there’s one topic that gets a lot of attention lately in the media, the public policy sphere, and in general health and wellness discussions, it is how to make the roadways safer. According to the Centers for Disease Control, fatalities from traffic incidents happen on an annual basis upwards of 33,000 people [1]. Many of these accidents are preventable, and an alarming number of them are a result of distracted driving. In the past few years, as a result of the number of traffic accidents plaguing the country and the devastating injuries and fatalities that result from them, a greater push has been made in the sphere of technology to make cars safer, drivers more aware, and accidents less likely [2]. So there are many ways to make car trip safer, and now we can watch the development of autopilot cars. An autonomous car and unmanned ground vehicle is a vehicle that is capable of sensing its environment and navigating without human input [3]. Autonomous cars use a variety of techniques to detect their surroundings, such as radar, laser light, GPS, odometer and computer vision. Advanced control systems interpret sensory information to identify appropriate navigation paths, as well as obstacles and relevant signage. Autonomous cars must have control systems that are capable of analyzing sensory data to distinguish between different cars on the road. The potential benefits of autonomous cars include reduced mobility and infrastructure costs, increased safety, increased mobility, increased customer satisfaction and reduced crime. Autonomous cars are predicted 76 to increase traffic flow; provide enhanced mobility for children, the elderly, disabled and the poor; relieve travelers from driving and navigation chores; lower fuel consumption; significantly reduce needs for parking space; reduce crime; and facilitate business models for transportation as a service, especially via the sharing economy. This shows the vast disruptive potential of the emerging technology. 
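To illustrate how such a control system might turn sensor readings into a driving decision, here is a deliberately simplified Python sketch; the sensor names, distances and thresholds are invented for illustration and do not describe any manufacturer's actual algorithm.

# Illustrative only: combine distance estimates from several sensors and
# decide whether to brake. All values and thresholds are invented.
def fuse_distance(radar_m, lidar_m, camera_m):
    """Take the most conservative (smallest) available distance estimate, in metres."""
    readings = [d for d in (radar_m, lidar_m, camera_m) if d is not None]
    return min(readings) if readings else None

def braking_command(distance_m, speed_mps, reaction_time_s=1.0, max_decel=6.0):
    """Return 'EMERGENCY_BRAKE', 'BRAKE' or 'CRUISE' for the given situation."""
    if distance_m is None:
        return "BRAKE"                      # no valid reading: slow down
    stopping_distance = speed_mps * reaction_time_s + speed_mps ** 2 / (2 * max_decel)
    if distance_m < stopping_distance:
        return "EMERGENCY_BRAKE"
    if distance_m < 2 * stopping_distance:
        return "BRAKE"
    return "CRUISE"

# Example: an obstacle 35 m ahead at 20 m/s (72 km/h) gives a stopping
# distance of roughly 53 m, so the sketch commands an emergency brake.
print(braking_command(fuse_distance(36.0, 35.0, 40.0), speed_mps=20.0))

Production systems fuse far more data and add prediction of other road users, but the basic loop of sensing, estimating and deciding is the same.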
In spite of the various benefits of increased vehicle automation, challenges exist: technological challenges, disputes concerning liability, individuals' reluctance to forfeit control of their cars, customer concern about the safety of driverless cars, the implementation of a legal framework and the establishment of government regulations; the risk of loss of privacy and security concerns, such as hacking or terrorism; concerns about the resulting loss of driving-related jobs in the road transport industry; and the risk of increased suburbanization as travel becomes less costly and time-consuming. Many of these issues are due to the fact that autonomous objects, for the first time, allow computers to roam freely, with many related safety and security concerns. The autopilots being tested today are not yet safe because of their dependence on the road situation: if the road is full of unexpected objects, these can mislead the sensors, and the autopilot's program may react incorrectly. A classification system of autopilots based on six different levels (ranging from fully manual to fully automated systems) was published in 2014 by SAE International, an automotive standardization body, as J3016, Taxonomy and Definitions for Terms Related to On-Road Motor Vehicle Automated Driving Systems [4]. This classification system is based on the amount of driver intervention and attentiveness required, rather than on the vehicle's capabilities, although the two are loosely related.
Level 0: The automated system issues warnings and may momentarily intervene but has no sustained vehicle control.
Level 1 ("hands on"): The driver and the automated system share control over the vehicle. An example would be Adaptive Cruise Control (ACC), where the driver controls steering and the automated system controls speed. With Parking Assistance, steering is automated while speed is manual. The driver must be ready to retake full control at any time. Lane Keeping Assistance (LKA) Type II is a further example of level 1 self-driving.
Level 2 ("hands off"): The automated system takes full control of the vehicle (accelerating, braking, and steering). The driver must monitor the driving and be prepared to intervene immediately at any time if the automated system fails to respond properly. The shorthand "hands off" is not meant to be taken literally; in fact, contact between hand and wheel is often mandatory during SAE 2 driving, to confirm that the driver is ready to intervene.
Level 3 ("eyes off"): The driver can safely turn their attention away from the driving tasks, e.g. to text or watch a movie. The vehicle will handle situations that call for an immediate response, like emergency braking. The driver must still be prepared to intervene within some limited time, specified by the manufacturer, when called upon by the vehicle to do so. The 2018 Audi A8 Luxury Sedan was the first commercial car to claim level 3 self-driving capability. The car has a so-called Traffic Jam Pilot: when activated by the human driver, the car takes full control of all aspects of driving in slow-moving traffic at up to 60 kilometers per hour. The function works only on highways with a physical barrier separating oncoming traffic.
Level 4 ("mind off"): As level 3, but no driver attention is ever required for safety, i.e. the driver may safely go to sleep or leave the driver's seat. Self-driving is supported only in limited areas (geofenced) or under special circumstances, like traffic jams. 
Outside of these areas or circumstances, the 78 vehicle must be able to safely abort the trip, i.e. park the car, if the driver does not retake control. Level 5 (”steering wheel optional”): No human intervention is required. An example would be a robotic taxi. Nowadays cars are coming more completed by using different safety systems such as Brake Assistant, Park Assistant, Automated Cruise Control (ACC) and etc., so they don’t need smooth driver’s reaction or any other specific driving skills, and we can attribute it to level 2 [5]. It means that in the near future cars will be automated for about 90%, and as result the number of human losses will decrease rapidly. References: 1. Autonomous Cars [Electronic resource]. – Mode of access: https://www.theverge.com/autonomous-cars. – Date of access: 30.03.2018. 2. History of autonomous cars [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/History_of_autonomous_ cars. – Date of access: 30.03.2018. 3. Autonomous car [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Autonomous_car. – Date of access: 30.03.2018. 4. A Short History of Mercedes-Benz Autonomous Driving Technology [Electronic resource]. – Mode of access: https://www.autoevolution.com/news/a-short-history-of- ercedes-benz-autonomous-driving-technology-68148.html. – Date of access: 30.03.2018. 5. Driverless cars of the future: How far away are we from autonomous cars? [Electronic resource]. – Mode of access: http://www.alphr.com/cars/1001329/driverless-cars-of-the- future-how-far-away-are-we-from-autonomous. – Date of access: 30.03.2018. 79 УДК 656.135.025.4:004:811.111 Butakova A., Ladutska N. How Information Technologies Impact Transportation Belarusian National Technical University Minsk, Belarus Better mobility improves the quality of our life and encourages individuals and organizations to contribute to the growth of the economy. Intelligent Transport Systems include many methods for enhancing the mobility of people and freight in all transportation modes. Intelligent transportation systems vary in technologies applied, from basic management systems such as car navigation; traffic signal control systems; container management systems; variable message signs; automatic number plate recognition or speed cameras to monitoring applications, such as security CCTV systems; and to more advanced applications that integrate live data and feedback from a number of other sources, such as parking guidance and information systems; weather information; bridge deicing systems; and the like. Some of the constituent technologies typically implemented in ITS are described below. Video Vehicle Detection. Traffic flow measurement and automatic incident detection using video cameras is another form of vehicle detection. Since video detection systems such as those used in automatic number plate recognition do not involve installing any components directly into the road surface or roadbed, this type of system is known as a non-intrusive method of traffic detection. Video from black-and-white or color cameras is fed into processors that analyze the changing characteristics of the video image as vehicles pass. The 80 cameras are typically mounted on poles or structures above or adjacent to the roadway. Electronic toll collection (ETC). Electronic toll collection (ETC) makes it possible for vehicles to drive through toll gates at traffic speed, reducing congestion at toll plazas and automating toll collection. 
Originally ETC systems were used to automate toll collection, but more recent innovations have used ETC to enforce congestion pricing through cordon zones in city centers and ETC lanes. Cordon Zones with Congestion Pricing. Cordon zones have been implemented in Singapore, Stockholm, and London, where a congestion charge or fee is collected from vehicles entering a congested city center. This fee or toll is charged automatically using electronic toll collection or automatic number plate recognition, since stopping the users at conventional toll booths would cause long queues, long delays, and even gridlock. The main objective of this charge is to reduce traffic congestion within the cordon area [1]. BelToll. The BelToll electronic toll collection system was implemented and operated by Kapsch TrafficCom in Belarus. The BelToll road network now comprises 1,189 kilometers. The system plays an important part in the efficient functioning of the traffic system. Not only do registered participants not need to stop at the toll gates, the electronic system also minimizes the risk of traffic back-ups and reduces emission levels. BelToll involves electronic collection of toll fees. The on-board units installed in the vehicles use microwave technology to communicate with the road-side infrastructure. Vehicles with a total weight of more than 3.5 tons as well as vehicles with a total weight of less than 3.5 tons that are registered outside of the customs union of Belarus, Russia, and Kazakhstan are required to pay tolls [2]. 81 Automatic Road Enforcement. A traffic enforcement camera system, consisting of a camera and a vehicle- monitoring device, is used to detect and identify vehicles disobeying a speed limit or some other road legal requirement and automatically ticket offenders based on the license plate number. Traffic tickets are sent by mail [1]. GPS. GPS, known originally as NAVSTAR GPS, is a satellite-based radio – navigation system designed and developed by the US Department of Defense as a navigational aid. This system allows an unlimited number of GPS receivers located anywhere in the earth’s surface and in view of the GPS satellites to accurately determine position, velocity and time. At present, GPS is used by numerous transportation agencies, country and local governmental agencies and transportation engineering consultants. GPS is increasingly being used for transportation applications by the private sector as well, with the invent of in vehicle navigation systems and fleet tracking systems [3]. Radio frequency identification (RFID) tags. Radio- frequency identification (RFID) uses electromagnetic fields to automatically identify and track tags attached to objects. The tags contain electronically stored information. Passive tags collect energy from a nearby RFID reader's interrogating radio waves. Active tags have a local power source (such as a battery) and may operate hundreds of meters from the RFID reader. Unlike a barcode, the tag need not be within the line of sight of the reader, so it may be embedded in the tracked object. RFID is one method for Automatic Identification and Data Capture (AIDC). Yard management, shipping and freight and distribution centers use RFID tracking. In the railroad industry, RFID tags mounted on locomotives and rolling stock identify the owner, identification number and type of equipment and its characteristics. This can be used with a database to identify the 82 lading, origin, destination, etc. of the commodities being carried. 
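The railroad use of RFID just described is essentially a database lookup keyed on the tag identifier. The following Python sketch shows the idea; the tag IDs, record fields and data are invented for illustration only.

# Invented example data: a tiny "database" keyed by RFID tag ID, standing in
# for the lookup described above for locomotives and rolling stock.
ROLLING_STOCK_DB = {
    "TAG-0417": {"owner": "BCh", "unit_id": "WAG-88231", "type": "container flatcar"},
    "TAG-0925": {"owner": "BCh", "unit_id": "LOC-3071",  "type": "diesel locomotive"},
}

def identify_unit(tag_id):
    """Return the stored record for a scanned RFID tag, or None if it is unknown."""
    return ROLLING_STOCK_DB.get(tag_id)

# A trackside reader would pass each scanned ID to identify_unit():
print(identify_unit("TAG-0417"))
# {'owner': 'BCh', 'unit_id': 'WAG-88231', 'type': 'container flatcar'}
print(identify_unit("TAG-9999"))   # None -> unknown tag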
RFID tags are used to identify baggage and cargo at several airports and airlines [4]. Information technologies have become an integral part of our daily lives, and the transport industry is no exception. Transport industry is becoming more and more computerized. Transportation companies have always been active in developing new software tools to improve transportation efficiency while reducing overall transportation costs. References: 1. Intelligent transport system [Electronic resource]. – Mode of access: http://studymafia.org/wp- content/uploads/2015/07/Civil-Intelligent-Transportation- System-report-ITS.pdf. – Date of access: 27.03.2018. 2. Kapsch TrafficCom [Electronic resource]. – Mode of access: https://www.kapsch.net/it/ktc/press/ktc_140807_pr. – Date of access: 02.04.2018. 3. GPS Application in Transportation System [Electronic resource]. – Mode of access: http://www.academia.edu/3862629/GPS_Applications_in_Tran sportation_System. – Date of access: 30.03.2018. 4. Radio-frequency identification [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Radio- frequency_identification. – Date of access: 29.03.2018. 83 УДК 656.025.4=111 Panova T., Lulenko K., Ladutska N. Intermodal Transport as a Way to Reduce Costs Belarusian National Technical University Minsk, Belarus Shippers always look for ways to cut costs and improve service, that’s why they consider transportation mode options when moving goods long distances. While trucking remains the most dominant mode of shipping product domestically, intermodal freight transport offers opportunities for freight savings and reduced emissions, especially when transporting products over distances of 700 km or more. Intermodal freight transport involves the transportation of freight in an intermodal container or vehicle, using multiple modes of transportation(e.g., rail, ship, and truck), without any handling of the freight itself when changing modes. The method reduces cargo handling, and so improves security, reduces damage and loss, and allows freight to be transported faster. Reduced costs over road trucking is the key benefit for inter-continental use. This may be offset by reduced timings for road transport over shorter distances. There are a lot of different transport modes to carry goods. Container ships are used to transport containers by sea. These vessels are custom-built to hold containers. Some vessels can hold thousands of containers. Their capacity is often measured in TEU or FEU. These initials stand for twenty- foot equivalent unit, and forty-foot equivalent unit, respectively. For example, a vessel that can hold 1,000 40-foot containers or 2,000 20-foot containers can be said to have a capacity of 2,000 TEU. Railways. In North America, containers are often shipped by rail in container well cars. 84 These cars resemble flatcars but the newer ones have a container-sized depression, or well, in the middle (between the bogies or trucks) of the car. This depression allows for sufficient clearance to allow two containers to be loaded in the car in a double stack arrangement. The newer container cars also are specifically built as a small articulated unit, most commonly in components of three or five, whereby two components are connected by a single bogie as opposed to two bogies, one on each car. Double stacking is also used in parts of Australia. If the rail line has been built with sufficient vertical clearance then Double-stack rail transport can be used. 
Where lines are electrified with overhead electric wiring double stacking is normally not possible. The mandatory requirement to fit under overhead wire for the traction engine electrical power supply sets the height limit for the railcars to allow for trailer transport. This requires a certain low building height which led to a minor size of wheels for the railcars. Hence increased degradation of bogeys by wheel wear-out is a cost disadvantage for the system. When carried by rail, containers can be loaded on flatcars or in container well cars. In Europe, stricter railway height restrictions (smaller loading gauge and structure gauge) and overhead electrification prevent containers from being stacked two high, and containers are hauled one high either on standard flatcars or other railroad cars [1]. The Belarusian Railway is a leading and one of the most important transport system of the republic, which transports up to 70% of all cargoes carried by the national public transport. One of the key goals of the Belarusian Railway is to make transit services more attractive to customers. To achieve the goal it provides direct fast container trains and implements new projects. Today about 20 container trains are in a regular 85 operation on the Belarusian Railway, and the number is growing. The railway infrastructure of the Republic of Belarus may be used effectively, and serve as a link between the East and the West within a single Eurasian transport area, if volumes of cargo transportation by container trains grows worldwide [2]. Trucks. Trucking is frequently used to connect the linehaul ocean and rail segments of a global intermodal freight movement. This specialized trucking that runs between ocean ports, rail terminals, and inland shipping docks, is often called drayage, and is typically provided by dedicated drayage companies or by the railroads. Barges. Barges utilising ro-ro and container-stacking techniques transport freight on large inland waterways such as the Rhine/Danube in Europe and the Mississippi River in the United States [3]. Land bridges. The term land bridge is commonly used in the intermodal freight transport sector in reference to a containerized ocean freight shipment that travels across a large body of land for a significant part of the trip, en route to its final destination. The land portion of the trip is referred to as the land bridge and the mode of transport used is rail transport. Planes. Generally modern, bigger planes usually carry cargo in the containers. Sometimes even the checked luggage is first placed into containers, and then loaded onto the plane. Of course because of the requirement for the lowest weight possible (and very important, little difference in the viable mass point), and low space, specially designed containers made from lightweight material are often used. Due to price and size, this is rarely seen on the roads or in ports. Pipelines. Pipelines are part of the intermodal freight transportation network as they are the preferred mode of transporting gas and liquids. Often unrecognized by the general 86 public due to their placement underground, pipelines contribute to freight transport and are critical to the economy. Compared to trucks and trains, pipelines are less damaging to the environment, less susceptible to theft and more economical, safe, convenient and reliable [3]. Intermodal freight transportation gives you lower rates, more predictable pricing, and the flexibility of loading and unloading goods, which reduces handling costs. 
Additionally, you can significantly reduce your carbon footprint. Intermodal transport makes it significantly easier to organize and optimize the available transport resources. That is why it is the best alternative to traditional transport. References: 1. Intermodal freight transport [Electronic resource]. – Mode of access: https://studfiles.net/preview/6334415/page:63/. – Date of access: 13.03.18. 2. Cargo transportation [Electronic resource]. – Mode of access: http://www.rw.by/en/freight/profile/. – Date of access: 18.03.18. 3. Intermodal freight transport [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Intermodal_freight_transport. – Date of access: 11.03.18. 87 УДК 629.33+629.35=111 Khadasevich U., Ladutska N. How to Ship a Car Easily and Affordably Belarusian National Technical University Minsk, Belarus Shipping a car is not the same as shipping a package. They’re bulkier, more expensive, and slower moving. It is a complex process because various factors have to be taken into account. For instance, there are the strict immigration requirements found in various countries. The process itself is time-consuming and you have to go through the thorough rigors of ensuring all your travel documents are in order. The way of shipping your car abroad is also very important. There are two main commonly used methods of getting your car cheaply and safely across the ocean. They are RoRo and Container Shipping. Many auto shipping companies offer car transportation services between the two countries. They typically will give you the option of shipping your car either in a container or on board a Roll on Roll (RoRo) off vessel. Container ships (sometimes spelled containerships) are cargo ships that carry all of their load in truck-size intermodal containers, in a technique called containerization. They are a common means of commercial intermodal freight transport and now carry most seagoing non-bulk cargo. Container ship capacity is measured in twenty-foot equivalent units (TEU). Typical loads are a mix of 20-foot and 40-foot (2-TEU) ISO- standard containers, with the latter predominant. Container ships are distinguished into 7 major size categories: small feeder, feeder, feedermax, Panamax, Post-Panamax, New Panamax and ultra-large. As of December 2012, there were 161 container ships in the VLCS class (Very Large Container 88 Ships, more than 10,000 TEU), and 51 ports in the world can accommodate them. Container ships under 3,000 TEU are generally called feeders. Feeders are small ships that typically operate between smaller container ports. Some feeders collect their cargo from small ports, drop it off at large ports for transshipment on larger ships, and distribute containers from the large port to smaller regional ports [1]. If you choose the first option, your car will be loaded into a 20-foot or 40-foot steel container and transported to Belarus on board a container vessel. As nobody wants to pay for the space in the shipping container that has not been used, the empty space around your car is sold to people shipping luggage, boxes and other small items which make it cheaper for you as you share the cost of shipping a car with others or you can choose a 20 foot container for your exclusive use. 
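Before turning to the RoRo option, the TEU arithmetic mentioned earlier can be written down directly: one 40-foot container counts as two twenty-foot equivalent units. The short Python sketch below only restates that rule; the mixed load in the last line is an invented example.

# TEU bookkeeping: a 20-foot container counts as 1 TEU, a 40-foot box (FEU) as 2 TEU.
def capacity_in_teu(containers_20ft, containers_40ft):
    return containers_20ft * 1 + containers_40ft * 2

# The example from the text: 1,000 forty-foot boxes, or 2,000 twenty-foot boxes,
# both correspond to 2,000 TEU.
print(capacity_in_teu(0, 1000))     # 2000
print(capacity_in_teu(2000, 0))     # 2000

# An invented mixed load: 600 twenty-foot and 450 forty-foot containers.
print(capacity_in_teu(600, 450))    # 1500 TEU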
Roll-on/roll-off (RORO or ro-ro) ships are vessels designed to carry wheeled cargo, such as cars, trucks, semi-trailer trucks, trailers, and railroad cars, that are driven on and off the ship on their own wheels or using a platform vehicle, such as a self-propelled modular transporter. This is in contrast to lift-on/lift-off (LoLo) vessels, which use a crane to load and unload cargo. RORO vessels have either built-in or shore-based ramps that allow the cargo to be rolled on and off the vessel efficiently when in port. While smaller ferries that operate across rivers and other short distances often have built-in ramps, the term RORO is generally reserved for large oceangoing vessels. The ramps and doors may be located at the stern, the bow or the sides, or any combination thereof [2]. Types of RORO vessels include ferries, cruise ferries, cargo ships, barges, and RORO services for air deliveries. New automobiles that are transported by ship are often moved on a large type of RORO called a pure car carrier (PCC) or pure car/truck carrier (PCTC). Elsewhere in the shipping industry, cargo is normally measured by the metric tonne, but RORO cargo is typically measured in lanes in metres (LIMs). This is calculated by multiplying the cargo length in metres by the number of decks and by its width in lanes (lane width differs from vessel to vessel, and there are several industry standards). On PCCs, cargo capacity is often measured in RT or RT43 units (based on the 1966 Toyota Corona, the first mass-produced car to be shipped in specialised car carriers and used as the basis of RORO vessel sizing; 1 RT is approximately 4 m of lane space required to store a 1.5 m wide Toyota Corona) or in car-equivalent units (CEU). If you choose the RoRo option to ship a car, your car will be transported in a special kind of vessel onto which vehicles are driven directly for transportation. This is the cheapest option, but there are some disadvantages. The two methods compare as follows.
RoRo advantages: a shorter delivery time; one of the cheapest options available.
Container advantages: greater safety; you can fill up your car with whatever you want; you can load the empty space at your own discretion; the vehicle does not have to be in full working order; the car is locked, the container is sealed and no one has access to it.
RoRo disadvantages: the car must meet certain requirements; the car must be in full working order and cannot be broken; the tank must be almost empty; shipping goods in the car is prohibited; the car is exposed, so there is a greater possibility of damage and theft.
Container disadvantages: a longer delivery time; the car must not exceed the dimensions of 6.1 m x 2.4 m x 2.6 m; if you do not fill the entire space of the container, you have to pay for the empty space; you share the space with others.
The cost to ship a car can vary significantly depending on vehicle size, the distance between origin and destination, the shipping method used and the shipping company. For example, vehicle shipping rates to Belarus from the USA are between $1,500 and $4,500 [3]. The shipment can take up to 17 days, so it is better to ask your shipping company to notify you of the exact date of arrival so that you can submit all the required documents to Customs and Border Protection (CBP). The vehicle will be thoroughly inspected at your chosen port of entry. References: 1. Container ship [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Container_ship. – Date of access: 15.04.2018. 2. Roll-on/roll-off [Electronic resource]. 
– Mode of access: https://en.wikipedia.org/wiki/Roll-on/roll-off. – Date of access: 29.03.2018. 3. Car transport to Europe – how to get your car from America back home [Electronic resource]. – Mode of access: https://www.carfax.eu/article/car-transport-to-europe.html. – Date of access: 16.03.2018. 91 УДК 811.111:62 Ganushchenko A., Lichevskaya S. Upcoming Technology Belarusian National Technical University Minsk, Belarus We have seen great leaps in digital technology in the past five years. Smartphones, cloud computing, multi-touch tablets, these are all innovations that revolutionized the way we live and work. However, we are just getting started. Technology will get even better. In the future, we could live like how people in science fiction movies did. The following five upcoming, real-life products are set to revolutionize the world, as we know it. 1. Paper diagnostics 2020 Experimental paper sensors that detect chemical or biological molecules have proved to be easy to use without the need for pricy equipment or trained specialists. They could have broader applications, such as treating neglected tropical diseases, mostly because pharmaceutical companies are focusing on widespread maladies that have a larger market. In addition to saving hundreds of thousands of lives each year in the developing world, these paper-based tests could stem health care costs by allowing home-based disease testing in developed regions [1]. 2. Smart clothing 2028 DuPont Advanced Materials (DuPont) have announced availability of its newest generation of stretchable electronic inks and films for smart clothing. Smart clothing technology provides critical biometric data including heart rate, breathing rate, form awareness and muscle tension. Intexar™ offers 92 superior stretch and comfort and is easily integrated into garments to make smart clothing. Garments powered by Intexar™ can endure over 100 washes, and continue to perform through repeated stretching and demanding performance [2]. 3. Cheap solar power 2033 A cheap solar panel system will forever be the best solution to expensive electric bills. Solar cells are getting cheaper each year. While you could pay up to $10,000 for an off-the-shelf installation and could cover the system’s price in just over 10 years, it is still better and more educational to make one yourself. Let us face it: we are living in a war right now. The battle for energy efficiency has never been fought with more advanced weaponry, and the winners are all those who pay less for more month after month. The first line of defense against paying more for electricity than you did last year is building your own solar panel system. It would certainly be nice being energy independent, let alone having an electric car that you could power with those solar cells to give you free rides for the rest of your life [3]. 4. 3D printing in every home 2037 The ability to design and manufacture a physical object using 3D printing was a major technological breakthrough. Today, just a few years since it was first introduced, there is already talk of how it is reshaping our future. Imagine if 3D printers could be found in every home. What would this mean? So say your dishwasher breaks down and needs just one, single part replaced – wouldn’t it be easier and faster, not to mention significantly cheaper, to print the part at home versus 93 calling the store, figuring out your warranty, ordering the aforementioned part, and waiting for it to arrive? It is not entirely as simple as that. 
There is a lot more to 3D printing than loading the paper tray and pressing print. The prospect of owning a 3D printer in every home is an exciting one, and as the technology continues to advance, 3D printers are able to produce more and more objects of varying materials on demand. However, while the potential applications are promising, the fate of 3D printing as a necessary appliance in households is still unclear [4]. 5. Holographic pets 2041 We have recently seen iRobot go public and its IPO did quite well. Each year iRobot is introducing new models in their consumer division. Some say that cats and dogs do not like these robots much, but animals often follow the robot around the house and cats stock it and then pounce and then run away. So indeed iRobot has in fact become part of the family and people would not want it any other way. While you are gone, you might set it to display various animals that your cat might like to hunt. Such as a pigeon landing on it and then taking off again – your cat will no doubt find this challenging and intriguing and it will hone their hunting skills. The vacuum might have a random projection set of 10-12 holographic images to keep your cat entertained and more than occupied [5]. It is obvious that modern life is impossible without rapid technological progress. That is why despite increasing number of health problems, atmosphere pollution, huge nature damage, people continue to introduce innovations in the field of technology. New technologies are for good. Technological progress continues and it moves rather fast. 94 References: 1. Paper Diagnostic Tests Could Save Thousands of Lives [Electronic resource]. – Mode of access: https://www.scientificamerican.com/article/paper-diagnostic- tests-could-save-thousands-of-lives/. – Date of access: 25.03.2016. 2. New smart clothing technology [Electronic resource]. – Mode of access: https://www.printedelectronicsworld.com/articles/11421/new- smart-clothing-technology. – Date of access: 28.07.2017. 3. Solar Panel System: How to Build a Cheap One [Electronic resource]. – Mode of access: https://www.greenoptimistic.com/solar-panel-system/. – Date of access: 22.01.2015. 4. Will We Ever Have 3D Printers in Every Home? [Electronic resource]. – Mode of access: https://futurism.com/will-ever-3d- printers-every-home/. – Date of access: 27.01.2016. 5. Holographic Projection Technologies of the Future [Electronic resource]. – Mode of access: http://www.worldthinktank.net/pdfs/holographictechnologies.p df. – Date of access: 05.05.2017. 95 УДК 62:811.111 Kirilyuk A., Mandik N., Lichevskaya S. 5 Ideas of Elon Musk Belarusian National Technical University Minsk, Belarus Elon Musk, a South African business magnate, investor, engineer and inventor has a vision to change the world and humanity. Here are his five ideas. 1) Internet Satellites Musk wants to offer low-cost and unfettered internet access for all. He means to do it by launching a fleet of satellites into space of course. Musk’s space transport SpaceX is in the early stages of producing a number (as many as 700) of micro satellites that can operate together in large formations. Each satellite will weigh 113kg, and will come in at a total cost of $1 billion [1, 3]. 2) Hyperloop Perhaps his plan to solve his own commuting frustrations on the east coast of America will do the trick. So, Musk frequently has to travel between San Francisco and Los Angeles to care to his dual duties at Tesla and SpaceX respectively. 
Musk’s proposed solution is to load yourself into an enormous shotgun shell and shoot yourself 400 miles across the state at 800 mph. The Hyperloop would transport people in individual aluminium pods through specially constructed overground tubes [1]. 3) Electric cars Still, back in 2006, Musk’s stated vision to help expedite the move from a mine-and-burn hydrocarbon economy towards a solar electric economy, seemed pretty lofty – even naive. Eight years on, it’s easy to forget what a profound impact 96 Musk’s work with Tesla Motors (which he funded using his vast fortune) has had on the perception and all-round viability of the electric car Launched in 2008, the Tesla Roadster was the first fully electric sports car, and the first roadworthy all- electric vehicle to enter serial production in the US. Tesla cars aren’t ugly, impractical prototypes, but classy, desirable, and perhaps most importantly practical vehicles. Tied into this is Musk’s and Tesla’s work in building an increasing network of charge stations around the US, and now in the UK and Europe as well. There are now 83 charging points across the European continent [1, 2]. 4) Affordable space travel Musk set up SpaceX in 2002 with a chunk of the proceeds from his $1.5 billion sale of PayPal to eBay. SpaceX was to be something rather unusual – a private space transport company. SpaceX’s goal is to dramatically lower the cost of space travel. This would help kickstart the flagging space programs of NASA and other institutions, and would even start to make space travel possible for normal citizens. This might sound fanciful, but SpaceX has already experienced some success here. The company’s Falcon 1 rocket became the first privately funded, liquid fuelled craft to enter Earth’s orbit in 2008. Then, in 2010, the company became the first to launch, orbit, and recover a spacecraft [4, 5]. 5) Colony on Mars Making space travel affordable might be an ambitious goal, but it isn’t a particularly sexy one. Nor does it sound remotely crazy. Indeed, as Musk tells it, it’s the idea that founded SpaceX. He wants to colonise Mars. The first stage in Musk’s ambitious plan was to set up a kind of greenhouse on the red planet – to send life the furthest it’s ever been. Images of lush foliage growing on the red planet would, in Musk’s estimation, reignite humanity’s thirst for space travel, and restore funding to major institutions like NASA. Musk long 97 ago realised that the problem in such a plan would be the cost of transport. That problem being on its way to resolution, he’s again been turning his attention to the idea of colonising Mars. Musk expects to be able to commence his company’s colonisation efforts in the mid-2030s, and to have a Mars colony up and running by 2040. If we have linear improvement in technology, as opposed to logarithmic, then we should have a significant base on Mars, perhaps with thousands or tens of thousands of people. This initial batch of people will need to pay their own way to Mars, but at an estimated $500,000, it may not be as expensive as you might have expected Elon Musk founded SpaceX with the long-term goal of developing the technologies that will enable a self-sustaining human colony on Mars. In 2015 he thought of sending a person to Mars in 11 or 12 years. According to Richard Branson, it would be absolutely realistic over the next 20 years to take literally hundreds of thousands of people to space. 
Buzz Aldrin, American engineer and former astronaut, and the second person to walk on the Moon, presented a master plan, for NASA consideration, for astronauts, with a tour of duty of ten years, to colonize Mars before the year 2040 humans could travel to Mars as early as 2024 with the aim of building a colony on the red planet. Musk's space exploration company SpaceX has laid out ambitious plans to establish a base on Mars after it unveiled a reusable rocket that could travel at speed so up to 27,000 kilometres per hour [1, 4, 5]. References: 1. Elon Musk's 5 craziest tech ideas for the future [Electronic resource]. – Mode of access: http://www.trustedreviews.com/opinion/elon-musk-5-craziest- tech-ideas-for-the-future-2920148. – Date of access: 28.03.2018. 98 2. Elon Musk's Tesla Roadster [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Elon_Musk%27s_Tesla_ Roadster. – Date of access: 20.03.2018. 3. JPL Horizons On-Line Ephemeris System [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/JPL_Horizons_On- Line_Ephemeris_System. – Date of access: 16.03.2018. 4. Elon Musk says SpaceX will try to launch his Tesla Roadster on new heavy-lift rocket [Electronic resource]. – Mode of access: https://spaceflightnow.com/2017/12/02/spacex-will-try- to-launch-elon-musks-tesla-roadster-on-new-heavy-lift-rocket. – Date of access: 21.03.2018. 5. Launching Elon Musk's car toward Mars was a backup plan – here's what SpaceX actually wanted to do with Falcon Heavy's first flight [Electronic resource]. – Mode of access: http://www.businessinsider.com/starman-tesla-backup- payload-spacex-musk-nasa-2018-2. – Date of access: 17.03.2018. 99 УДК 159.925.8 Kukshinov A., Lichevskaya S. Body Language Belarusian National Technical University Minsk, Belarus Your body language doesn’t merely reflect your emotions. By learning some of the principal ways that your own posture, gestures, facial expression and even tone of voice affect your mind, you will be more aware of the factors influencing your mood, and give yourself an edge in presentations and negotiations. Let’s see some examples. Opening up your body and filling more space – known as a «power posture» – has been shown in studies to have a range of confidence-boosting effects [1, 2]. Body language is also very relevant to relationships outside of work, for example in dating and mating, and in families and parenting. The way you listen, look, move, and react tells the other person whether or not you care, if you’re being truthful, and how well you’re listening. When your nonverbal signals match up with the words you are saying, they increase trust, clarity, and rapport. When they do not, they can generate tension, mistrust, and confusion. Each gesture or movement can be a valuable key to an emotion a person may be feeling at the time. For example, the person who is feeling fearful or defensive might fold their arms or cross their legs or both. First of all it is good to realise that we do not talk continuously, but do give out signals continuously through body language when we are in someone else's company. Furthermore it is useful to look at the different levels on which we communicate. For the most part we communicate on the 100 content as well as relational level at the same time. Specifically we express the content through words and the relation through body language. Content level Of course we are talking about something when we talk to other people. 
We want to make something clear to the other person about a particular subject. This is the content of the conversation. At content level we say, or portray, what the message is about. It is usually the easiest to convey the content of a message through spoken language or commonly understood gestures. Due to the fact that the meaning of words, figures or signals that we use have been agreed to unilaterally, its form of expression does not need to bear any resemblance with what is denoted. The word clock for example has nothing to do with time. To understand the other person you need to speak his language. When the words or signals that we use to communicate do not bear any resemblance with what it denotes, we call this digital language [3]. The farther away from the brain a body part is positioned, the less awareness we have of what it is doing. For example, most people are aware of their face and what expressions and gestures they are displaying and we can even practise some expressions to put on a brave face or give a disapproving look, grin and bear it or look happy when Grandma gives you ugly underwear again for your birthday. After our face, we are less aware of our arms and hands, then our chest and stomach and we are least aware of our legs and almost oblivious to our feet [4]. Children were often told by their grandmothers to put on a happy face, wear a big smile and show your pearly whites when meeting someone new because Grandma knew, on an intuitive level, it would produce a positive reaction in others. The first recorded scientific studies into smiling were in the 66 The Magic of Smiles and Laughter early part of the nineteenth 101 century when French scientist Guillaume Duchenne de Boulogne used electrodiagnostics and electrical stimulation to distinguish between the smile of real enjoinment and other kinds of smiling. He analysed the heads of people executed by guillotine to study how the face muscles worked. He pulled face muscles [5, 6]. References: 1. Body Language [Electronic resource]. – Mode of access: http://www.lichaamstaal.com/english/body.html. – Date of access: 25.02.2018. 2. The Definitive Book of Body Language [Electronic resource]. – Mode of access: https://e- edu.nbu.bg/pluginfile.php/331752/mod_resource/content/0/All an_and_Barbara_Pease_- _Body_Language_The_Definitive_Book.pdf. – Date of access: 25.02.17. 3. Non-verbal Communication [Electronic resource]. – Mode of access: https://www.helpguide.org/articles/relationships- communication/nonverbal-communication.htm. – Date of access: 25.02.2018. 4. How to Communicate with Body Language [Electronic resource]. – Mode of access: https://www.wikihow.com/Communicate-With-Body- Language. – Date of access: 25.02.2018. 5. Body Language - Language Article [Electronic resource]. – Mode of access: https://english-magazine.org/english- reading/learn-language-articles/919-body-language. – Date of access: 25.02.2018. 6. Body Language [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Body_language. – Date of access: 25.02.2018. 102 УДК 629.33.083.5:004.9:811.111 Koval D., Lapko O. 
Development of Technological Documentation for Maintenance and Repair Using a Modular Approach Belarusian National Technical University Minsk, Belarus The most important problem of technical operation of cars is the effective management of their working capacity since the effectiveness of using different service technologies allows to increase the resource, to reduce idle time and the costs of operation and guarantees high operational reliability. By the interstate standards it is established that technical maintenance and repair are necessary types of work in a standard life cycle of a product [1]. The information support of the technical maintenance system and the equipment repair along with the design documents includes organizational, technical and technology documents. According to expert assessment, the optimal performance of technology operations of technical maintenance and car repairs results in the increase of the between-repairs run of cars and cutting costs for technical maintenance and repair by 10- 15%, and allows to provide the planned operation resource. Technological processes of technical maintenance and repair are developed by all leading producers of cars in the process of new models production. The leading foreign manufacturers implemented the system of electronic technical documentation. For example, for cars of the German concern Daimler AG technology processes of technical maintenance and repair according to the service program Mercedes EWA net-WIS [2] are used, for service of the MAN cars the service program by MAN Workshop 103 Infosystem (MAN WIS) [3] is used, for Volvo – Volvo Impact 02-2015 (Bus & Lorry) is developed [4]. Similar programs are developed and are implemented by PJSC KAMAZ and Group of the GAZ company. Since 2000 JSC MAZ and BELNIIT Transtekhnika have been working at the creation of the specifications and technical documentation for technical maintenance and repair of MAZ new models and these results are still being used in servicing trucks and buses. During that period technology documentation practically for all new models of vehicles, including low- tonnage cars MAZ 4370, dump trucks MAZ 5516, MAZ 5551, truck tractors MAZ 5440, MAZ 6430, buses of the 1st and 2nd generations, trailers and semi-trailers was developed. Documentation was developed on papers. Besides, the complete set of documentation was prepared for each model. At the same time there is a problem in using such documentation, including its distribution in service centers (now the JSC MAZ has 30 service centers in the Republic of Belarus and 200 – in foreign countries). Besides, current trends in commercial automotive industry show that updating the model range is becoming rapid, manufacturers including JSC MAZ, tend to work under the requirements of the specific customer. Respectively there is a large number of modifications of cars for which the documentation in technical maintenance and repair is also necessary. To increase the efficiency of the development of MAZ cars service technology processes their development on basic models with the use of modular approach is offered. Actually it is the creation of the database of separate technology processes and the simplest assembly program of the technical process. The car consists of certain nodes and units. However, a simple assembly of a complete set will break real communication of nodes regarding their service. Therefore, the 104 binding of nodes and the systems of cars will be carried out in their location in the car. 
For example, the exhaust system of gases, the cooling system, rudder control, a frame, a cabin, the suspender, a front axle will be tied to a basic car model, and a supply system – to an engine make. Having specified a basic car model and models of the nodes and units, the program will make the process of technical maintenance or repair. The software in creation of technology processes is going to be implemented on the basis of the electronic service program Unified Information System on Technical Maintenance of Cars MAZ [5]. For full coverage of different options of a complete set it is necessary to develop technology processes of technical maintenance and repair for all component parts used in assembly. An integral part of the technology process is the labor input of work which can be presented in the form of the separate module with a simple search engine. Thus, the possibility of documentation use becomes much simpler. The list of the component parts used is covered by 37 models of vehicles for the development of technical maintenance processes and by 20 models for repair. These are biaxial and three-axial cars, truck tractors and platform buses. With the advent of new models with the units which are not included in the provided list the technology process only on this unit or a node will be developed. Processes contain technical requirements and instructions, the sequence and time allowance of operation performance, the equipment and the materials used, personnel qualification etc. Besides, with the change of technical requirements (for example, the use of new materials, the oils of better quality, etc.) modification of technical process won't be required. It will be enough to make changes in the database. The technology documentation developed on such principle will provide full functioning of the electronic service program 105 of JSC MAZ – A unified information system on technical maintenance of cars MAZ the analogs of which do not exist in the Republic of Belarus. Implementation of the project will allow to provide service maintenance of all model range of the equipment made by JSC MAZ, including new models of automatic telephone exchange. At the same time the changes in the system and repair will be immediately considered. The organization and repair of MAZ cars will be comparable with service support of the leading manufacturers of cars thus increasing the competitiveness of JSC MAZ cars. References: 1. ГОСТ 15.601-98. Система разработки и постановки продукции на производство. Техническое обслуживание и ремонт. Основные положения. 2. Mercedes EPS & WIS 04-2017 [Electronic resource]. – Mode of access: http://www.autocatalogues.com/catalogues/Mercedes_WIS_net .htm. – Date of access: 23.10.2017. 3. MAN Workshop Infosystem (MANWIS) [Electronic resource]. – Mode of access: http://www.autocatalogues.com/catalogues/manwis.htm. – Date of access: 23.10.2017. 4. The database of the technical information relating to the transport industry sorted by the producer and categories [Electronic resource]. – Mode of access: https://www.epcatalogs.com/volvo-impact-2015-bus-and-lor. – Date of access: 23.10.2017. 5. Minsk automobile plant [Electronic resource]. – Mode of access: http://maz.by/ru/products/spec-offer/rf/technical- service. – Date of access: 23.10.2017. 106 УДК 336.761:811.111 Vasilieva N., Podgurskaya V., Lapko O. Bull Position vs Bear Position Belarusian National Technical University Minsk, Belarus What are the financial markets? 
In fact, they go by many terms including capital markets, Wall Street and even simply the markets. Whatever you call them, financial markets are where traders buy and sell assets. These include stocks, bonds, derivatives, foreign exchange and commodities. It’s where companies reduce risks and investors make money. Although some financial markets are very small with little activity, some financial markets including the New York Stock Exchange (NYSE) and the Forex markets trade trillions of dollars of securities daily [1]. Financial markets create an open and regulated system for companies to get large amounts of capital. This is done through the stock and bond markets. Markets also allow these businesses to offset risk. They do this with commodities, foreign exchange future contracts and other derivatives. Since the markets are public, they provide an open and transparent way to set prices on everything traded. They reflect all available knowledge about everything traded. This reduces the cost of getting information, because it's already incorporated into the price. The sheer size of the financial markets provide liquidity. In other words, sellers can unload assets whenever they need to raise cash. The size also reduces the cost of doing business, since companies don't have to go far to find a buyer, or someone willing to sell. Financial market prices may not indicate the true intrinsic value of a stock due to macroeconomic forces like taxes. In addition, the prices of 107 securities are heavily reliant on informational transparency by the issuing company to ensure that efficient and appropriate prices are set by the market. The terms bull and bear market are used to describe how stock markets are doing in general – that is, whether they are appreciating or depreciating in value. At the same time, because the market is determined by investors' attitudes, these terms also denote how investors feel about the market and the ensuing trends. Simply put, a bull market refers to a market that is on the rise. It is typified by a sustained increase in market share prices. In such times, investors often have faith that the uptrend will continue over the long term [2]. Typically, in this scenario, the country's economy is strong and employment levels are high. By contrast, a bear market is one that is in decline. Share prices are continuously dropping, resulting in a downward trend that investors believe will continue, which, in turn, perpetuates the downward spiral. During a bear market, the economy will typically slow down and unemployment will rise as companies begin laying off workers [3]. A bear position is a term for a short position in a financial security. A bear position attempts to profit in a market by betting that prices will fall for certain securities. The short seller borrows securities in the hopes that prices will decline. When the price drops, the investor makes a profit on the price change. When the price rises, the investor loses money. There are also numerous alternative ways to initiate bear positions such as buying put options or buying inverse ETFs. A bear position is a trade or investment that is made in the hopes that the security's price will drop. If a short sale moves against the investor or trader, the trader may be exposed to unlimited losses since the price of the security can continue to rise. This is in contrast to a long position where the price of the security can move against the investor only a certain amount; that is, to 108 zero. 
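As a rough numerical illustration of this asymmetry (the prices and share counts below are invented for the example and are not taken from any market data), the profit and loss of a bear (short) position and a bull (long) position can be compared directly:

```python
# Illustrative profit/loss of long (bull) and short (bear) positions.
def long_pnl(entry_price: float, exit_price: float, shares: int) -> float:
    """Bull position: profit when the price rises."""
    return (exit_price - entry_price) * shares

def short_pnl(entry_price: float, exit_price: float, shares: int) -> float:
    """Bear position: profit when the price falls."""
    return (entry_price - exit_price) * shares

# Long: the worst case is the price falling to zero, so the loss is limited to the stake.
print(long_pnl(50.0, 0.0, 100))      # -5000.0
# Short: the price can, in principle, keep rising, so the loss is unbounded.
print(short_pnl(50.0, 200.0, 100))   # -15000.0
print(short_pnl(50.0, 40.0, 100))    #  +1000.0 profit when the price drops
```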
The use of alternative strategies to initiate a bear position can mitigate some of these risks. A bull position is a long position in a financial security, such as a stock in the stock market. A bull or long position seeks to profit from rising prices in certain securities. When prices rise, a bull position becomes profitable. If prices fall, the bull position is not profitable. A bull or long position is the most well-known type of position and is what is typically used in buy and hold investing. An alternative way to initiate a bull position can include buying call options. A bull position is a trade or investment that is initiated in the hopes that the instrument's price will rise and make a profit. A bull market occurs when prices are rising, and is characterized by investor optimism and confidence that prices will continue to rise [2]. Outperform the market means doing better than the market average. It's also known as beating the market. It happens when your investment portfolio does better than the 7- 10 percent annual average the stock market has done over time. For example, an emerging markets fund outperforms the market when it has a higher return than the MSCI index. Market analysts use the term to recommend stocks they think you should buy [4]. Wouldn't it be better to put all your money in bonds and gold in a bear market, and switch to stocks and oil when a bull market begins? Yes, if you knew for sure that was happening. That's called timing the market. It's virtually impossible for even professional traders to do. How do you know when a bear market has begun? It starts with a market correction of a 10 percent decline. In a market crash, this can happen in a day. If you happen to miss it, then what do you do? Sell all your stocks, in the fear the correction turns into a bear market? Then you can be sure the market will go even higher the next day, and you've missed all your gains for the year. Although all bear markets start with a correction, not all corrections turn into 109 bear markets. You can see this for yourself by reviewing the 10 booms and busts since 1980. With diversification, you can gradually shift asset classes over time. You don't have as much at risk if you are wrong. That's the best way to outperform the market. References: 1. An Introduction to the Financial Markets [Electronic resource]. – Mode of access: https://www.thebalance.com/an- introduction-to-the-financial-markets-3306233. – Date of access: 13.03.2018. 2. Digging Deeper into Bull and Bear Markets [Electronic resource]. – Mode of access: https://www.investopedia.com/articles/basics/03/100303.asp. – Date of access: 11.03.2018. 3. How to Adjust Your Portfolio in a Bear or Bull Market [Electronic resource]. – Mode of access: https://www.investopedia.com/articles/investing/040313/how- adjust-your-portfolio-bear-or-bull-market.asp. – Date of access: 17.03.2018. 4. 5 Ways to Outperform the Market. Which One Is Safe? [Electronic resource]. – Mode of access: https://www.thebalance.com/outperform-the-market-3305874. – Date of access: 17.03.2018. 110 УДК 004.896=111 Krapivin S., Makarevich V., Matusevich O. Robots versus Artificial Intelligence Belarusian National Technical University Minsk Belarus What's the difference between robotics and artificial intelligence (AI)? First of all, robotics and AI serve very different purposes. However, folks often get them mixed up. A lot of people wonder if robotics is a subset of AI or if they are the same thing. 
The first point to clarify is that robotics and AI are absolutely different things. In fact, the two fields of study are almost entirely separated. If we represent a Venn diagram of these two aspects, it will be like this: Fig 1. Venn diagram We guess that people sometimes confuse these two concepts because of the overlap between them, namely Artificially Intelligent Robots. To understand how these three 111 terms relate to each other, let's examine each of them individually. Robotics is a branch of engineering and science which deals with the design, construction, operation and use of robots. The latter are programmable machines which are usually able to carry out a series of actions autonomously, or semi- autonomously. AI is a branch of computer science that involves developing computer programs to complete tasks which would otherwise require human intelligence. Colloquially, the term artificial intelligence is applied when a machine mimics cognitive functions that humans associate with other human minds, such as learning and problem solving. Roboticists are nowhere near achieving this level of AI, but they have made a lot of progress with more limited AI. Computers can already solve problems in limited realms. First, the AI robot or computer gathers facts about a situation through sensors or human input. The computer compares this information to stored data and decides what the information signifies. The computer runs through various possible actions and predicts which action will be most successful based on the collected information. The real challenge of AI is to understand how natural intelligence works. We do know that the brain contains billions of neurons, and that we think and learn by establishing electrical connections between different neurons. But we don't know exactly how all of these connections add up to higher reasoning, or even low-level operations. In any case, robots will certainly play a larger role in our daily lives in the future. In the coming decades, robots will gradually move out of the industrial and scientific worlds and into daily life, in the same way that computers spread to the home in the 1980s. According to The Verge 2017 tech report card, AI boomed this year like few other areas in tech, but despite the 112 scientific breakthroughs, glut of funding, and new products rolling out to consumers, the field has problems that can’t be ignored [1]. Some of these, like company-driven hype and sensationalist headlines, need better communication from the media and experts. Others challenges are more nuanced and will take longer to address, such as bias in algorithms and the growing threat of tech firms becoming AI monopolies as they hoover up data and talent. Where robots seem to be most powerful is as threat to the workplace – and not just manual labor, but white collar professions, like those in the legal and insurance industries. The past year has seen new studies confirming that robots do indeed destroy jobs, and they are likely to increase inequality. The greater threat, say some experts, is not unemployment, but bad employment, as automation creates a small number of high-skilled, high-paying jobs, but pushes others into low-paid and precarious work that only looks peachy in labor statistics [2]. Fig 2. Illustration of productivity to employment Another question of whether robots can be humans also misses a crucial point. It’s not about whether AI can help robots become human beings. Robots should not 113 pretend to be humans at all. 
AI can help people solve human problems without assuming a sentient role in society. People building AI can help fellow folks by focusing on problem solving and enhancing productivity. It’s significantly more important for technologists to communicate the benefits of the AI technology itself, rather than focus on examples of robots that do not solve real issues. Using AI and robots to sensationalize the human experience and scaremonger society into believing a robot takeover is an inevitable future makes life harder for everyone [2]. For consumers, it prevents people from truly embracing the increasingly personalized benefits AI can offer to their daily lives. For technologists who work on AI every day, the practice of demonizing and aggrandizing AI advancement severely impedes actual innovation and technical progress. Engineers need to ensure that the AI they create has the ability to learn, discern bias, and avoid making the same mistakes prior to replacing traditionally human-held positions in the workforce and in society, in general. Ultimately, society’s responsibility is not to make AI more human-like, but to make AI that significantly improves human lives. References: 1. The Verge 2017 tech report card: Artificial intelligence and robotics [Electronic resource]. − Mode of access: https://www.theverge.com/2017/12/30/16832164/2017-tech- recap-ai-robots-machine-learning. − Date of access: 22.01.18. 2. Benefits and risks of artificial intelligence [Electronic resource]. – Mode of access: http://www.futureoflife.org. – Date of access: 13.03.2018. 114 УДК 621.039.5=111 Ostreyko A., Matusevich O. Nuclear Power Stations Belarusian National Technical University Minsk, Belarus A nuclear power station (or a nuclear power plant) is a thermal power station in which the heat source (or in other words the main part that produces and releases energy) is a nuclear reactor. As is typical of thermal power stations, heat from a reactor is used to turn water into steam that drives a steam turbine connected to a generator. The latter is used to transform mechanical energy into electrical one. As of 23 April 2014, the International Atomic Energy Agency (IAEA) reports that there are 449 nuclear power reactors in operation situated in 31 countries all over the world [1]. The history of a nuclear reactor began from the discovery of nuclear fission on December 17, 1938. In short, nuclear fission is a process in which the nucleus of an atom under external influence (for example: neutron bombardment) splits into several units, most likely two nuclei and 2-3 neutrons. This process is exothermic, and external energy is released in the form of kinetic energy of those particles. We talk about nuclear chain reaction provided that a neutron released by the one nuclear fission causes another nuclear fission. The main problem of nuclear reaction is controlling. The nature of this process depends on the multiplication factor of a nuclear chain reaction. It is numerically equal to the number of subsequent reactions caused by a single reaction. To make a self-sustainable chain reaction that can be used in nuclear reactors multiplication factor must be equal to 1, otherwise any variations are very critical. 115 Nowadays, there are few types of nuclear power stations that produce energy all over the world. They can be classified by several methods, but most commonly, by the type of a nuclear reactor: 1. 
Pressurized Water Reactor A pressurized water reactor (also abbreviated as PWR) is the most popular one among modern active nuclear power plants, their number is 292 from total 448 reactors in the world, IAEA data, end of 2015 [1]. PWRs are one of three types of light water reactor (LWR), the other types being boiling water reactors (BWRs) and supercritical water reactors (SCWRs). In a PWR, the primary coolant (water) is pumped under high pressure to the reactor core where it is heated by the energy released by the fission of atoms. The heated water then flows to a steam generator where it transfers its thermal energy to a secondary system where steam is generated and flows to turbines which, in turn, spin an electric generator. In contrast to a BWR, pressure in the primary coolant loop prevents the water from boiling within the reactor. All LWRs use ordinary water as both coolant and neutron moderator [2]. The Biblis Nuclear Power Plant has two PWR power units, with a total capacity of 2,525 MW. Biblis Nuclear Power Plant, Germany 116 2. Boiling Water Reactor The boiling water reactor (BWR) is a type of light water reactor (LWR) used for the generation of electrical power. It is the second most common type of electricity-generating nuclear reactor after the PWR. The main difference between a BWR and PWR is that in a BWR, the reactor core heats water, which turns to steam and then drives a steam turbine. In a PWR, the reactor core heats water which doesn’t boil. This hot water then exchanges heat with a lower pressure water system, which turns to steam and drives the turbine. The BWR was developed by the Idaho National Laboratory and General Electric (GE) in the mid-1950s. The main present manufacturer is GE Hitachi Nuclear Energy, which specializes in the design and construction of the reactor of this type. The Browns Ferry Nuclear Power Plant has four BWR power units, with a total capacity of 3,310 MW. Browns Ferry Nuclear Power Plant, USA 3. Fast Neutron Reactor A fast neutron reactor (a fast-breeder reactor or a breeder reactor) is a nuclear reactor that generates more fissile material than it consumes. These devices achieve this because their 117 neutron economy is high enough to breed more fissile fuel than they use from fertile material, such as uranium-238 or thorium- 232 [3]. Breeders were at first found attractive because their fuel economy was better than LWRs, but interest declined after the 1960s as more uranium reserves were found, and new methods of uranium enrichment reduced fuel costs. There are currently only two commercially operating fast neutron reactors, BN-600 and BN-800, both located in the Beloyarsk Nuclear Power Plant. 4 th power unit of Beloyarsk Nuclear Power Plant, Russia References: 1. Nuclear Power Reactors [Electronic resource]. – Mode of access: http://world-nuclear.org. – Date of access: 02.04.2018. 2. Weart, S.R. The Rise of Nuclear Fear / S.R. Weart. – Cambridge, MA: Harvard University Press, 2012. – 384 p. 3. Sokolov, F. Thorium fuel cycle – Potential benefits and challenges / F. Sokolov, K. Fukuda, H.P. Nawada. – Vienna, International Atomic Energy Agency, 2005. – 105 p. 118 УДК 621.311.25:811.111 Panteley D., Matusevich O. Sahara Forest Project Belarusian National Technical University Minsk, Belarus It’s no longer news that the world’s population is ever- increasing. It’s no longer news that it’s going to be taught to feed the extra mouths. It’s no longer news that climate change is turning fresh water into our planet’s most scarce resource. 
When we look at the set of problems around sustainable energy, food and water systems the real issue becomes how people manage the footprint on the landscape. We have two choices: keep talking or start acting. Humanity needs to find a way to produce fresh water for biomass production. It’s necessary to find ways to store renewable energy from solar and wind and other renewable sources into biomass so people can sell it all over the world. Imagine the difference it would make, if we could turn our deserts green, if we could use seawater and solar power to make this happen to produce enough food, fresh water and energy to sustain local populations. Imagine we could do all this with technologies that are commercially viable with the potential to be scaled up and implemented around the globe. This might sound like a dream but this is a reality nowadays. And it is called the Sahara Forest Project [1]. In 2009 the SFP was presented in Copenhagen. The first SFP pilot facility in Qatar contains 10,000 square meters of environmental technologies that has never been put together before. The SFP Pilot in Qatar includes: 119 1. Concentrated Solar Power The Sahara Forest Project launch station The SFP demonstrates an innovative greenhouse concentrated solar power (CSP) cooling system which enables the low-cost use of saltwater to achieve wet-cooling efficiencies without utilizing precious freshwater resources. The heat from the CSP mirrors is used to drive a multistage evaporative desalination system for producing distilled water for the plants in the greenhouse and outside. The waste heat is used to warm the greenhouses in the winter and to regenerate the desiccant used for dehumidifying the air. 2. Saltwater Greenhouses Saltwater-cooled greenhouses provide suitable growing conditions that enable year-round cultivation of high-value vegetable crops in the rough Qatar’s desert. The greenhouse- structure consists of 3 bays with polythene roof coverings on the horticultural yield. The cooling system is an evaporative cooler at one end of the greenhouse. The cool air is supplied under the plants via polythene ducts to ensure that the cool air is distributed evenly along the greenhouse and at low level. As 120 the air heats up, it rises and is expelled via high level openings in the end wall. By using saltwater to provide evaporative cooling and humidification, the crops’ water requirements are minimized and yields are maximized with minimal carbon emissions. 3. Outside Vegetation and Evaporative Hedges The water coming from the greenhouse is at a concentration of about 15% salinity. To reduce the water content further, the brine is passed over external vertical evaporators set out in an array to create sheltered and humid environments. These areas are planted to take advantage of the beneficial growing conditions for food and fodder crops and for a wide range of desert species. New candidate species for use as harvested and grazing fodder for livestock, and as bioenergy feedstock, is identified and characterized from among native desert plants. The carbon sequestration benefits of various planting and cropping approaches are measured and compared. 4. Photovoltaic Solar Power The SFP is supported by state photovoltaic (PV) technology. Dust arresting from the surrounding vegetation and water for cleaning the PV-panels ensure an efficient electricity generation. The Sahara Forest Project System 121 5. 
Salt Production As the water is evaporated from saltwater the salinity increases to the point that the salts precipitate out from the brine. The last stage of this process is taking place in conventional evaporation ponds. 6. Halophytes Beyond traditional horticulture and agriculture, halophytes (salt-loving plant species) are cultivated in saltwater. These hardy plants, often already well adapted to desert conditions, are highly promising sources of fodder and bioenergy feedstocks. Irrigating with saltwater directly into the soil can cause significant environmental harm. So, the SFP implements a variety of up-to-date cultivation techniques. 7. Algae Production Marine algae are one of the most promising future sources of bioenergy. Nutrients with the SFP saltwater- greenhouse infrastructure, mariculture operations, and soil remediation methods are developed. This will not be important for Qatar but for all the region with the same climate. The SHP is really innovative example of tying together all of the aspects of sustainability. The SFP will provides people with a unique opportunity to optimize our technological system and to be large-scale in future. The SHP shows what can be done when great minds think alike and work together without boundaries. It proves that we can take the things we have too much of and use them to produce the things we need more of. Turning the desert green can be done. See it. Believe it. References: 1. Sahara Forest Project [Electronic resource]. – Mode of access: http://www.saharaforestproject.com. – Date of access: 23.01.2018. 122 УДК 620.95=111 Kovtun G., Soloviov S., Matusevich O. Energy Production from Waste Belarusian National Technical University Minsk, Belarus The current irrational use of fossil fuels and the impact of greenhouse gases on the environment are driving research into renewable energy production from organic resources and waste. The global energy demand is high, and most of this energy is produced from fossil resources. Biogas is produced after organic materials (plant and animal products) are broken down by bacteria in an oxygen- free environment, a process called anaerobic digestion (AD). Biogas systems use AD to recycle these organic materials, turning them into biogas, which contains both energy (gas), and valuable soil products (liquids and solids) [1]. After biogas is captured, it can produce heat and electricity for use in engines, microturbines, and fuel cells. Biogas can also be upgraded into biomethane, also called renewable natural gas (RNG), and injected into natural gas pipelines or used as a vehicle fuel. The process of biogas generation is divided into four steps: 1. Preparation of the input material 2. Digestion (fermentation) and other complex chemical reactions 3. Conversion of the biogas to renewable electricity and useful heat with cogeneration / combined heat and power 4. Biogas use for various purposes [2]. Initially the feedstock to the digesters is received in a primary pit or liquid storage tank. From here it is loaded into 123 the digester by various means depending upon the composition of waste materials. In the digestion tanks a series of biological processes are harnessed in order to produce biogas. Hydrolysis is the process where the organic material is solubilized into the digestion liquid. Then it undergoes the intermediate steps of acidogenesis and acetogenesis which create the precursor molecules for methanogenesis. Methanogens feed off these precursors and produce methane as a cellular waste product. 
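The cumulative gas output of such a digester is often approximated with a simple first-order model. The sketch below uses invented parameter values (ultimate yield and rate constant) purely for illustration; it is not taken from the cited sources and only shows the general shape of the production curve.

```python
import math

def cumulative_biogas(t_days: float, b_max: float = 450.0, k: float = 0.15) -> float:
    """First-order approximation of cumulative biogas yield.

    b_max: ultimate yield, e.g. litres of biogas per kg of volatile solids (assumed value).
    k:     first-order rate constant per day (assumed value).
    """
    return b_max * (1.0 - math.exp(-k * t_days))

for day in (5, 10, 20, 30):
    print(f"day {day:2d}: {cumulative_biogas(day):6.1f} L per kg of volatile solids")
# Output rises quickly at first and then levels off towards b_max,
# which is why digesters are sized for a finite retention time.
```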
The biogas containing this biologically-derived methane is contained and captured in a gas storage tank which is located separately to the main digester, or alternatively can form its roof. The gas storage tank acts as a buffer in order to balance fluctuations in the production of gas in the digesters. The biogas is then converted into renewable power in the form of electricity and heat [3]. Biogas plant In Europe, the production of biogas reached 1.35 × 107 t in 2014. Europe was prompt in applying sustainable waste management. European bodies implemented new research 124 programs to support an alternative-fuels future based on renewable resources. Germany is the leading biogas producer in Europe, with more than 8,000 biogas plants currently in operation, and its biogas amount corresponds to an approximate total electricity capacity of 4 TWh. Swedes recycle nearly 100 % of their household waste. The southern Swedish city of Helsingborg even fitted public waste bins with loudspeakers playing pleasant music. They even have to import waste to have something to burn, to turn waste into energy. In 2014, Sweden even imported 2.7 million tonnes of waste from other countries [4]. Swedish waste sorting system Conclusion Biogas systems are waste management solutions that solve multiple problems and create multiple benefits. References: 1. Clarke Energy [Electronic resource]. – Mode of access: https://www.clarke-energy.com. – Date of access: 20.02.2018. 2. ScienceDirect [Electronic resource]. – Mode of access: https://www.sciencedirect.com. – Date of access: 13.03.2018. 3. Van Foreest, F. Perspectives for Biogas in Europe / F. Van Foreest. – Oxford Institute for Energy Studies, 2012. – 54 p. 4. World Energy Outlook [Electronic resource]. – Mode of access: http://www.iea.org. – Date of access: 27.03.2018. 125 УДК 681.586.773=111 Monich K., Nikitin Y., Matusevich O. Piezoelectricity Belarusian National Technical University Minsk, Belarus What is the future of the energy industry? Firstly, and perhaps most important, we are in danger of ruining the planet’s climate through carbon dioxide emissions. If we continue to use fossil fuels, we may increase the temperature of the planet in ways that will harm our entire ecosystem and us. Secondly, we cannot keep using fossil fuels forever. They will eventually run out, even as the population of Earth grows. For both these reasons, we need to find other sources of energy that do not emit carbon dioxide when used. What is a piezo and how does it work? Piezo derived from the Greek piezein, which means to squeeze or press, is a prefix in piezoelectricity [1]. This term means the charge that accumulates in a solid material (often ceramic) in response to applied mechanical strain. A piezoelectric material has electromechanical interaction between its mechanical and electrical state. When a piezoelectric material is compressed, it creates an electrical field. A piezoelectric disk generates a voltage when deformed 126 The inverse is also true. When a piezoelectric material is subjected to an electrical field, it will change dimensions [1]. In two of Tokyo's busiest stations scientists are using passengers to generate more energy with special flooring tiles installed in front of ticket turnstiles. An average person, weighing 60 kg, will generate only 0.1 W in the single second. When they are covering a large area of floor space and thousands of people are stepping or jumping on them, significant amounts of power are generated [2]. 
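Taking the figure quoted above of roughly 0.1 W from one footstep lasting one second (about 0.1 J per step), a short back-of-the-envelope calculation shows why only a very busy floor produces useful amounts of power; the pedestrian counts below are assumptions chosen for illustration.

```python
# Rough arithmetic based on ~0.1 J recovered per footstep (0.1 W for one second).
ENERGY_PER_STEP_J = 0.1

def average_power_watts(steps_per_hour: int) -> float:
    """Average electrical power from a tiled area with the given foot traffic."""
    return steps_per_hour * ENERGY_PER_STEP_J / 3600.0

for steps in (1_000, 100_000, 1_000_000):   # assumed traffic levels
    print(f"{steps:>9,} steps per hour -> {average_power_watts(steps):7.3f} W average")
# Even a million steps per hour averages under 30 W, so piezoelectric floors
# suit low-power loads such as signage and sensors rather than bulk supply.
```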
Demonstration experiment at Tokyo Station The English company Pavegen is the global leader in harvesting energy and data from footfall. Its vision is for smarter, more sustainable built environments which empower and connect people. The technology enables people to directly engage with clean energy, to increase their understanding of sustainability issues, and to connect purposefully with brands. Pavegen uses what it calls a hybrid black box technology to convert the energy of a footstep into electricity, which is either stored in a battery or fed directly to devices. These tiles generate electricity with a hybrid solution of mechanisms that include the piezoelectric effect and induction, which uses copper coils and magnets. The marathon runners generated 4.7 kWh of energy, enough to power a five-watt LED bulb for 940 hours, or 40 days. 127 Mechanism of the power-generating floor The company Innowattech demonstrates how Israeil technology can produce electricity from generators installed beneath a road’s asphalt layer. This innovation is based on piezoelectric materials that enable the conversion of mechanical energy exerted by the weight of passing vehicles into electrical energy, without stealing any energy whatsoever from the vehicles. The technology does not increase the vehicle’s fuel intake or affect the road infrastructure [3]. Electro-Kinetic Road Ramp The accumulated energy can be used to power traffic lights or street lamps and in the future could be routed into the grid. The company observes that, Innowattech's solution is capable of producing significant amounts of electricity, about 400 kWh from a 1 km stretch of generators along the dual 128 carriageway [3]. According to official statistics, the current cost for fitting a kilometer (half-mile) of one lane of highway is about $650,000, with a cost of $6,500 per 1 kW. With mass production the price can drop by two thirds, making the system even cheaper than solar energy systems [3]. Innowattech has broadened their energy harvesting designs to generate energy from railways as well as roads. The company has performed a project with the National Railway Company of Israel. Last year preliminary results suggested that areas of railway track that get between 10 and 20 ten-car trains an hour, can produce 120 kW per hour. This is electricity that could be used on the railway itself, or to power the signaling, measure the speed and weight of trains, as well as to transfer it to the grid. The technology of piezoelectricity enables the supply of electricity to various road-side applications, such as traffic lights, billboards, police speed cameras, communication systems, road signs, etc., as well as transfer of the harvested electricity into the electric grid, to supply electricity to households. References: 1. Piezoelectricity [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Piezoelectricity. – Date of access: 23.01.2018. 2. Energy-Generating Floors to Power Tokyo Subways [Electronic resource]. – Mode of access: https://inhabitat.com/tokyo-subway-stations-get-piezoelectric- floors. – Date of access: 15.03.2018. 3. Energy harvesting roads in Israel [Electronic resource]. – Mode of access: https://www.offgridenergyindependence.com/ articles/1589/energy-harvesting-roads-in-israel. – Date of access: 14.02.2018. 129 УДК 546:811.111 Yaroshevich E. Mileiko A. 
Chemical Elements Used in Engineering Belarusian National Technical University Minsk, Belarus Engineering uses a large number of different chemical elements, but we will look at the main ones that are used in every field of engineering. Nickel is a chemical element with Ni symbol and atomic number of 28. It is a silvery-white shiny metal with a slight golden tinge. Nickel belongs to the transition metals and it is hard and ductile. Nickel usage has been traced as far back as 3500 BC. Nickel was first taken and classified as a chemical element in 1751 by Axel Fredrik Cronstedt. Major production sites are the Sudbury region in Canada, New Caledonia in the Pacific, and Norilsk in Russia [1]. Brass is a metallic alloy that is made of copper and zinc. The proportions of zinc and copper can vary to create different types of brass alloys with different mechanical and electrical properties. Both bronze and brass may include small amount of other different elements including arsenic, lead, phosphorus, aluminium, manganese, and silicon. Brass has higher malleability than bronze or zinc. Brass is used for decoration due to its bright gold-like appearance and also for applications where low friction is required, for instance: locks, gears, bearings, doorknobs, ammo and valves; for plumbing and electrical applications; and extensively in musical instruments such as horns and bells where a combination of high workability and durability is needed. It is also used in zippers. Brass is often used in issues where it is important that sparks are not struck, such as in tools used near flammable or explosive materials [2]. 130 Copper is a chemical element with symbol Cu and atomic number 29. It is a soft, malleable, and ductile. A freshly exposed surface of pure copper has a reddish-orange color. Historically, copper was the first metal to be worked by people. The discovery that it could be hardened with a little tin to form the alloy bronze gave the name to the Bronze Age. Traditionally it has been one of the metals used to make coins, along with silver and gold. However, it is the most common of the three and therefore the least valued. All US coins are now copper alloys, and gun metals also contain copper. Most copper is used in electrical equipment such as wiring and motors. This is because it conducts both heat and electricity very well, and can be drawn into wires. It also has uses in construction (for example roofing and plumbing), and industrial machinery (such as heat exchangers).Copper is one of few metals that could be found in nature in directly usable metallic form (native metals) [3]. Aluminium or aluminum is a chemical element with symbol Al and atomic number 13. It is a silvery-white, soft, nonmagnetic and ductile metal in the boron group. By mass, aluminium makes up about 8% of the Earth's crust; it is the third most abundant element after oxygen and silicon and the most abundant metal in the crust, though it is less common in the mantle below. Aluminium is remarkable for its low density and its ability to resist corrosion through the phenomenon of passivation. Aluminium and its alloys are vital to the aerospace industry and important in transportation and building industries, such as building facades and window frames. The oxides and sulfates are the most useful compounds of aluminium [4]. Silicon is a chemical element with symbol Si and atomic number 14. A hard and brittle crystalline solid with a blue-grey metallic lustre, it is a tetravalent metalloid and semiconductor. 
It is a member of group 14 in the periodic table, along with carbon above it and germanium, tin, and lead 131 below. It is rather unreactive, though less so than germanium, and has a very large chemical affinity for oxygen; as such, it was first prepared and characterized in pure form only in 1823 by Jöns Jakob Berzelius. Silicon is the eighth most common element in the universe by mass, but very rarely occurs as the pure element in the Earth's crust. It is most widely distributed in dusts, sands, planetoids, and planets as various forms of silicon dioxide (silica) or silicates. Over 90% of the Earth's crust is composed of silicate minerals, making silicon the second most abundant element in the Earth's crust (about 28% by mass) after oxygen [5]. Lead is a chemical element with symbol Pb and atomic number 82. It is a heavy metal that is denser than most common materials. Lead is soft and malleable, and has a relatively low melting point. When freshly cut, lead is bluish- white; it tarnishes to a dull grey colour when exposed to air. Lead has the highest atomic number of any stable element. metallic lead beads dating back to 7000–6500 BC have been found in Asia Minor and may represent the first example of metal smelting. At that time lead had few applications due to its softness and dull appearance [1]. References: 1. Scerri, E.R. The periodic table: its story and its significance / E.R. Scerri. – Oxford University Press. – 2007. – pp. 239–240. 2. Joseph, R.D. Copper and Zinc Alloys / R.D. Joseph. – ASM International. – 1 January 2001. – p.7. 3. Scott, D.A. Copper and Bronze in Art: Corrosion, Colorants, Conservation / D.A. Scott. – Getty Publications. – 2002. 4. Aluminum. – Encyclopedia Britannica. – 12 March 2012. 5. Voronkov, M.G. Silicon era / M.G. Voronkov. – Russian Journal of Applied Chemistry. – No. 80. – 2007. 132 УДК 669.131.6:811.111 Kachina V. Mileiko A. Grey Cast Iron and White Cast Iron Belarusian National Technical University Minsk, Belarus Cast iron is a family of metals produced by smelting metal, and then pouring it into a mold. The primary difference in production between wrought iron and cast iron is that cast iron is not worked with hammers and tools. There are also differences in composition – cast iron contains 2–4% carbon and other alloys, and 1–3% of silicon, which improves the casting performance of the molten metal. Small amounts of manganese and some impurities like sulfur and phosphorous may also be present. Differences between wrought iron and cast iron can also be found in the details of chemical structure and physical properties [1]. Due to the presence of carbon in cast iron, it may sometimes be confused with steel. However, there are significant differences. Steel contains less than 2% carbon, which enables the final product to solidify in a single microcrystalline structure. The higher carbon content of cast iron means that it solidifies as a heterogeneous alloy, and therefore has more than one microcrystalline structure present in the material. It is the combination of high carbon content, and the presence of silicon, that gives cast iron its excellent castability [1]. The differences between grey cast iron and white cast iron emerge from the composition and the colour of the surface of the material after fracturing. Both of these iron casting alloys mainly contain carbon and silicon, but in different proportions. 
A key difference between grey cast iron and white cast iron is that after fracturing, white cast iron shows a white-coloured crack surface while grey cast iron produces a grey-coloured fractured surface. This is basically due to the constituents of each alloy [2]. The most commonly used category of casting alloy is grey cast iron. Its composition includes about 2.5% to 4% carbon and 1% to 3% silicon. In the production of grey cast iron, proper control of the carbon and silicon content and of the cooling rate prevents the formation of iron carbide during solidification. This allows graphite to precipitate directly from the melt as regular, commonly elongated and curved flakes in an iron matrix saturated with carbon. When the metal fractures, the crack path runs through the flakes and the fractured surface appears grey because of the graphite present in the material [3]. White cast iron got its name from the white, crystalline crack surface that it shows after fracturing. In general, most white cast irons contain less than 4.3% carbon and a smaller amount of silicon, which inhibits the precipitation of carbon in the form of graphite. White cast iron is most frequently used in applications where abrasion resistance is essential and ductility is not a significant requirement. Examples are liners for cement mixers, some drawing dies, ball mills and extrusion nozzles. White cast iron cannot be welded because it is very difficult to accommodate welding-induced stress in the absence of any ductile properties in the base metal; moreover, the heat-affected zone adjacent to the weld may crack during cooling after welding [4]. The composition of grey cast iron is mostly about 2.5% to 4.0% carbon and 1% to 3% silicon, with the balance being iron [5]. White cast iron generally contains about 1.7% to 4.5% carbon and 0.5% to 3% silicon, and it may also contain trace amounts of sulphur, manganese and phosphorus [2]. Grey cast iron has a higher compressive strength and high resistance to deformation. Its melting point is relatively low, 1140 °C to 1200 °C. It also has a greater resistance to oxidation; therefore, it rusts very slowly, which gives a lasting solution to the problem of corrosion [5]. In white cast iron the carbon is present in the form of iron carbide. It is hard and brittle and has a greater tensile strength, but, unlike malleable cast iron (which is produced from white cast iron by annealing), it cannot be hammered or pressed permanently out of shape without breaking or cracking. It also has high compressive strength and excellent wear resistance, and it can maintain its hardness for limited periods, even up to a red heat. It cannot be cast as easily as other irons because it has a relatively high solidification temperature [2]. The most common applications of grey cast iron are internal combustion engine cylinders, pump housings, electrical boxes, valve bodies and decorative castings; it is also used in cooking equipment and brake rotors [5]. White cast iron is most extensively used in crushing, grinding, milling and the handling of abrasive materials [2]. References: 1. John, G. A History of Cast Iron in Architecture / G. John, D. Bridgwater. – Allen and Unwin, London. – 2001. 2. Peter, R.L. Disaster on the Dee: Robert Stephenson's Nemesis of 1847 / R.L. Peter. – Tempus. – 2007. 3. Harold, T. Angus, Cast Iron: Physical and Engineering Properties / T. Harold. – Butterworths, London. – 2003. 4. Peter, R.L.
Beautiful Railway Bridge of the Silvery Tay: Reinvestigating the Tay Bridge Disaster of 1879 / R.L. Peter. – Tempus. – 2004. 5. George, L. Abrasion-Resistant Cast Iron Handbook / L. George, R. Gundlach, K. Röhrig. – ASM International. – 2000. 135 УДК 621.791.754:811.111 Nazarov D., Mileiko A. Gas Tungsten Arc Welding Belarusian National Technical University Minsk, Belarus Manual gas tungsten arc welding is a relatively difficult welding method, due to the coordination required by the welder. Similar to torch welding, GTAW (see box №1) normally requires two hands, since most applications require that the welder manually feeds a filler metal into the weld area with one hand while manipulating the welding torch in the other [1]. Maintaining a short arc length, while preventing contact between the electrode and the workpiece, is also important. To strike the welding arc, a high frequency generator (similar to a Tesla coil) provides an electric spark. This spark is a conductive path for the welding current through the shielding gas and allows the arc to be initiated while the electrode and the workpiece are separated, typically about 1.5–3 mm (0.06– 0.12 in) apart. Once the arc is struck, the welder moves the torch in a small circle to create a welding pool, the size of which depends on the size of the electrode and the amount of current. While Box №1 GTAW weld area 136 maintaining a constant separation between the electrode and the workpiece, the operator then moves the torch back slightly and tilts it backward about 10–15 degrees from vertical. Filler metal is added manually to the front end of the weld pool as it is needed. Welders wear protective clothing, including light and thin leather gloves and protective long sleeve shirts with high collars, to avoid exposure to strong ultraviolet light. Due to the absence of smoke in GTAW, the electric arc light is not covered by fumes and particulate matter as in stick welding or shielded metal arc welding, and thus is a great deal brighter, subjecting operators to strong ultraviolet light. The welding arc has a different range and strength of UV light wavelengths from sunlight, but the welder is very close to the source and the light intensity is very strong. Potential arc light damage includes accidental flashes to the eye or arc eye and skin damage similar to strong sunburn. Operators wear opaque helmets with dark eye lenses and full head and neck coverage to prevent this exposure to UV light. Modern helmets often feature a liquid crystal-type face plate that self-darkens upon exposure to the bright light of the struck arc. Transparent welding curtains, made of a polyvinyl chloride plastic film, are often used to shield nearby workers and bystanders from exposure to the UV light from the electric arc. Welders often develop a technique of rapidly alternating between moving the torch forward (to advance the weld pool) and adding filler metal. The filler rod is withdrawn from the weld pool each time the electrode advances, but it is always kept inside the gas shield to prevent oxidation of its surface and contamination of the weld. Filler rods composed of metals with a low melting temperature, such as aluminium, require that the operator maintain some distance from the arc while staying inside the gas shield. If held too close to the arc, the filler rod can melt before it makes contact with the weld puddle. 
As the 137 weld nears completion, the arc current is often gradually reduced to allow the weld crater to solidify and prevent the formation of crater cracks at the end of the weld [2]. Gas tungsten arc welding is most commonly used to weld stainless steel and nonferrous materials, such as aluminium and magnesium, but it can be applied to nearly all metals, with a notable exception being zinc and its alloys. Its applications involving carbon steels are limited not because of process restrictions, but because of the existence of more economical steel welding techniques, such as gas metal arc welding and shielded metal arc welding. Furthermore, GTAW can be performed in a variety of other-than-flat positions, depending on the skill of the welder and the materials being welded [3]. For GTAW of carbon and stainless steels, the selection of a filler material is important to prevent excessive porosity. Oxides on the filler material and workpieces must be removed before welding to prevent contamination, and immediately prior to welding, alcohol or acetone should be used to clean the surface. Preheating is generally not necessary for mild steels less than one inch thick, but low alloy steels may require preheating to slow the cooling process and prevent the formation of martensite in the heat-affected zone. Tool steels should also be preheated to prevent cracking in the heat- affected zone. Austenitic stainless steels do not require preheating, but martensitic and ferritic chromium stainless steels do [4]. Welding dissimilar metals often introduces new difficulties to GTAW welding, because most materials do not easily fuse to form a strong bond. However, welds of dissimilar materials have numerous applications in manufacturing, repair work, and the prevention of corrosion and oxidation. In some joints, a compatible filler metal is chosen to help form the bond, and this filler metal can be the same as one of the base materials (for example, using a stainless steel filler metal with 138 stainless steel and carbon steel as base materials), or a different metal (such as the use of a nickel filler metal for joining steel and cast iron). Very different materials may be coated or buttered with a material compatible with a particular filler metal, and then welded. In addition, GTAW can be used in cladding or overlaying dissimilar materials. When welding dissimilar metals, the joint must have an accurate fit, with proper gap dimensions and bevel angles. Care should be taken to avoid melting excessive base material. Pulsed current is particularly useful for these applications, as it helps limit the heat input. The filler metal should be added quickly, and a large weld pool should be avoided to prevent dilution of the base materials. Welders are also often exposed to dangerous gases and particulate matter. While the process doesn't produce smoke, the brightness of the arc in GTAW can break down surrounding air to form ozone and nitric oxides. The ozone and nitric oxides react with lung tissue and moisture to create nitric acid and ozone burn. Ozone and nitric oxide levels are moderate, but exposure duration, repeated exposure, and the quality and quantity of fume extraction, and air change in the room must be monitored. Welders who do not work safely can contract emphysema and oedema of the lungs, which can lead to early death. Similarly, the heat from the arc can cause poisonous fumes to form from cleaning and degreasing materials. 
Cleaning operations using these agents should not be performed near the site of welding, and proper ventilation is necessary to protect the welder. References: 1. American Welding Society. – Welding handbook, welding processes. – Part 1. – Miami, Florida: American Welding Society. – 2004. 2. Cary, H.B. Modern welding technology / H.B. Cary, S.C. Helzer. – Upper Saddle River, New Jersey: Pearson Education. – 2005. 3. Jeffus, L.F. Welding: Principles and Applications / L.F. Jeffus. – Fourth edition. – Thomson Delmar. – 1997. 4. Tungsten Selection [Electronic resource]. – Mode of access: www.Arc-Zone.com. – Date of access: 03.03.2018. УДК 004.032.26:811.111 Aristova D., Molchan O. Facial Recognition Using Convolutional Neural Networks Belarusian National Technical University Minsk, Belarus The best results in the field of facial recognition have been shown by the Convolutional Neural Network, or CNN, which is a logical development of the ideas behind such neural network architectures as the cognitron and the neocognitron. Testing of a CNN on the ORL database, which contains face images with small variations in lighting, scale, rotation, pose and facial expression, showed 96% recognition accuracy. A CNN is used, for example, in Facebook's DeepFace system for recognizing the faces of the social network's users [1]. How does a CNN-based system recognize faces? The process consists of four major steps. The first step is face detection. Face detection went mainstream in the early 2000s, when Paul Viola and Michael Jones invented a way to detect faces that was fast enough to run on cheap cameras, but nowadays there are much more reliable solutions. We are going to use a method invented in 2005 called the Histogram of Oriented Gradients, or just HOG for short [2]. We look at every single pixel in the image one at a time. The goal is to figure out how dark the current pixel is compared to the pixels that surround it, and then to draw an arrow showing in which direction the image is getting darker. Saving the gradient for every single pixel, however, gives us far too much information, so we break the image up into small squares of 16 by 16 pixels each and replace each square with the arrow direction that was strongest there. The second step is posing and projecting faces. We use an algorithm called face landmark estimation. The basic idea is to identify 68 specific points (called landmarks) that exist on every face and then to train a machine to find these 68 points on any face. The third step is encoding faces. It turns out that measurements which are obvious to us (like eye color) don't really make sense to a computer. Researchers have discovered that the most accurate approach is to let the computer figure out which measurements to collect: deep learning does a better job than humans at figuring out which parts of a face are important to measure. The last step is finding the person's name from the encoding. All we have to do is find the person in our database of known people whose measurements are closest to those of our test image. This can be done with any basic machine learning classification algorithm [3].
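A minimal sketch of this last matching step is shown below. It assumes that each face has already been turned into a numeric encoding (a 128-value vector, as in many published face-recognition pipelines); the names, the random encodings and the threshold value are invented purely for illustration.

```python
import numpy as np

# Hypothetical database: person name -> 128-dimensional face encoding.
# In a real system these vectors would come from the trained network.
rng = np.random.default_rng(0)
known_faces = {name: rng.normal(size=128) for name in ("Alice", "Bob", "Carol")}

def identify(test_encoding: np.ndarray, threshold: float = 0.6) -> str:
    """Return the known person whose encoding is closest to the test encoding."""
    best_name, best_dist = None, float("inf")
    for name, encoding in known_faces.items():
        dist = np.linalg.norm(encoding - test_encoding)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    # If even the closest match is too far away, treat the face as unknown.
    return best_name if best_dist < threshold else "unknown"

# A probe that is a slightly noisy copy of Bob's encoding should match Bob.
probe = known_faces["Bob"] + rng.normal(scale=0.01, size=128)
print(identify(probe))
```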
References: 1. Achievements in deep learning over the past year [Electronic resource]. – Mode of access: https://habrahabr.ru/company/mailru/blog/338248/. – Date of access: 28.03.2018. 2. Modern Face Recognition with Deep Learning [Electronic resource]. – Mode of access: https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78. – Date of access: 28.03.2018. 3. Analysis of existing approaches to face recognition [Electronic resource]. – Mode of access: https://habrahabr.ru/company/synesis/blog/238129/. – Date of access: 28.03.2018. УДК 004.383.8:811.111 Golubev A., Molchan O. Machine Learning and Genetic Algorithms Belarusian National Technical University Minsk, Belarus Machine learning algorithms find natural patterns in data that generate insight and help predict the unknown so that better decisions can be made. They use computational methods to learn information directly from data, without relying on a predetermined equation as a model, and they adaptively improve their performance as the number of samples available for learning increases. Supervised learning finds patterns (and develops predictive models) using both input data and output data. All supervised learning techniques take the form of either classification or regression: classification is used for predicting discrete responses, while regression is used for predicting continuous responses. Unsupervised learning finds patterns based only on input data. This technique is useful when you are not quite sure what you are looking for, and it is often used for exploratory analysis of raw data. Most unsupervised learning techniques are forms of cluster analysis, in which data items that have some measure of similarity, based on their characteristic values, are grouped together [1]. Genetic algorithms (GA) were invented to mimic some of the processes observed in natural evolution. Many people, biologists included, are astonished that life at the level of complexity that we observe could have evolved in the relatively short time suggested by the fossil record. The idea behind a GA is to use this power of evolution to solve optimization problems. The father of the original genetic algorithm was John Holland, who invented it in the early 1970s [2].
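To make the idea concrete, here is a minimal genetic-algorithm sketch. The fitness function, population size and mutation rate are arbitrary choices made for the example; the code simply evolves bit strings towards a target pattern to show the selection, crossover and mutation loop, and is not any particular published algorithm.

```python
import random

TARGET = [1] * 20                                   # toy goal: twenty 1-bits
POP_SIZE, GENERATIONS, MUTATION = 30, 60, 0.02      # assumed parameters

def fitness(genome):
    """Count how many bits already match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randint(1, len(a) - 1)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < MUTATION else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    # Reproduction: refill the population with mutated offspring of random parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)), "of", len(TARGET))
```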
By implementing deep learning techniques, PayPal can analyze vast quantities of customer data and evaluate risk in a far more efficient manner. Traditionally, fraud detection algorithms have dealt with very linear results: fraud either has or hasn’t occurred. But with machine learning 144 and neural networks, PayPal is able to draw upon financial, machine, and network information to provide a deeper understanding of a customer’s activity and motives [3]. The focus will be on making systems that perform specific tasks become our personal assistants. They could help us reduce energy usage by making better use of resources and improve care for the elderly by finding more time for meaningful human contact. Many industries could turn to algorithms to increase productivity. Financial services could become fully automated. Over the next 10 years machine learning technologies will increasingly become an indispensable part of people’s lives, transforming the way they work and live. References: 1. Machine Learning for dummies [Electronic resource]. – Mode of access: https://becominghuman.ai/machine-learning- for-dummies-explained-in-2-mins-e83fbc55ac6d. – Date of access: 16.03.2018. 2. 10 Real-World Examples of Machine Learning and AI [Electronic resource]. – Mode of access: https://www.redpixie.com/blog/examples-of-machine-learning. – Date of access: 26.03.2018. 3. Genetic Algorithms [Electronic resource]. – Mode of access: https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol1/hmw/ar ticle1.html. – Date of access: 16.03.2018. 145 УДК 004.35:811.111 Korotkevich V., Molchan O. The Internet of Things Belarusian National Technical University Minsk, Belarus The Internet of Things (IoT) is sensors and actuators embedded in physical objects and linked through wired and wireless networks, often using the same Internet Protocol (IP) that connects the Internet [1]. There are three fundamental components that combine to form an IoT node: intelligence, sensing, and wireless communications. The IoT embedded platforms can include sensors like infrared ones, accelerometers and gyroscopes to detect and gather information on real-world objects [2]. Nevertheless there are 2 key challenges: 1) The technology itself. Engineers pushed to leverage the size of micro-electromechanical systems (MEMS) but it seems to be impossible. 2) Most of MEMS comes from smartphones segment. But the IoT world is very different, characterized by a highly fragmented structure of competing technological platforms [3]. There are 3 main options for wireless communication: 1) ZigBee. It is a low-power wireless network that was involved in industrial and building automation. A novel aspect of ZigBee is mesh networking. 2) BLE. A key advantage of BLE is a support of the original Bluetooth, which makes it more robust than ZigBee. 3) Wi-Fi. It is predominant communication technology because it offers the best power-per-bit efficiency. However, power consumption is high [4]. 146 Concerns have been raised that the IoT is being developed rapidly without appropriate consideration of the profound security challenges involved. In fact, there are three major challenges that we cannot ignore: ubiquitous data collection, unexpected uses of data, heightened security risks [4]. Most of the technical security issues are similar to servers, workstations and smartphones, but the firewall, security update and anti-malware systems used for those are generally unsuitable for the much smaller, less capable IoT devices. 
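The gap is easier to see from the scale of a typical node. The sketch below is a deliberately simplified, hypothetical sensing loop (read_temperature and publish are stand-ins, not a real device API): a device running little more than this has no spare memory or processing power for a conventional firewall or anti-malware stack, so security has to be addressed elsewhere in the system.

```python
import time

def read_temperature() -> float:
    """Placeholder for a driver call to a real sensor (assumed, not a real API)."""
    return 21.5

def publish(topic: str, payload: str) -> None:
    """Placeholder for sending data over the node's radio, e.g. via MQTT or BLE."""
    print(f"{topic}: {payload}")

# The entire 'application' of a constrained IoT node can be a loop this small.
while True:
    publish("home/livingroom/temperature", f"{read_temperature():.1f}")
    time.sleep(60)   # sleep between readings to save the battery
```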
Without adequate security, intruders can break into IoT systems and networks, accessing potentially sensitive personal information about users, and using vulnerable devices to attack local networks and devices. A significant amount of work has already been done in the EU and USA. There will be stronger regulation for companies developing systems that process personal data to protect security and privacy. Also they can use access control measures and encrypt data. The IoT and blockchain are two topics which are causing a great deal of hype in the technology circle. The idea that putting them together could result in something even greater than the sum of its parts. For instance, blockchain can be used to track the sensor and prevent duplication with any other malicious data [5]. The other possible applications of the IoT are: healthcare, buildings and utilities. References: 1. IoT Analytics [Electronic resource]. – Mode of access: https://iot-analytics.com/internet- of-things- definition/. – Date of access: 21.02.2018. 2. IoT [Electronic resource]. – Mode of access: https://www.lanner-america.com/knowledgebase/IoT/. – Date of access: 04.03.2018. 147 3. Smart Sensors Fulfilling the Promise of the IoT [Electronic resource]. – Mode of access: https://www.sensorsmag.com/components/smart-sensors- fulfilling-promise-iot. – Date of access: 13.03.2018. 4. The fundamental components of the Internet of Things [Electronic resource]. – Mode of access: https://www.electronicsworld.co.uk/news/advertorial s/5022-the-fundamental-components-of-the-internet-of-things. – Date of access: 14.03.2018. 5. Blockchain and the Internet of Things: 4 Important Benefits of Combining These Two Mega Trends [Electronic resource]. – Mode of access: https://www.forbes.com/sites/bernardmarr/2018/01/28/blockch ain-and-the-internet-of-things-4-important-benefits-of- combining-these-two-mega-trends/#5d90d1d019e7. – Date of access: 16.03.2018. 148 УДК 004.738.1.056:811.111 Kosyakova D., Molchan O. What Is HTTPS and What Does It Do? Belarusian National Technical University Minsk, Belarus HTTP was originally proposed by Tim Berners-Lee, who designed the application protocol in mind to perform high-level data communication functions between Web-servers and clients. It takes the well-known and understood HTTP protocol, and simply layers a SSL/TLS encryption layer on top of it. Servers and clients still speak exactly the same HTTP to each other, but over a secure SSL connection that encrypts and decrypts their requests and responses. The SSL layer has 2 main purposes: verifying that you are talking directly to the server that you think you are talking to; ensuring that only the server can read what you send it and only you can read what it sends back. The really clever part is that anyone can intercept every single one of the messages you exchange with a server, including the ones where you are agreeing on the key and encryption strategy to use, and still not be able to read any of the actual data you send. Let’s have a closer look at how an SSL connection is established. An SSL connection between a client and server is set up by a handshake, the goals of which is to agree on a cipher suite; to agree on any necessary keys for this algorithm. Once the connection is established, both parties can use the agreed algorithm and keys to securely send messages to each other. We will break the handshake up into 3 main phases – Hello, Certificate Exchange and Key Exchange: 149 1. 
Hello – The handshake begins with the client sending a ClientHello message. This contains all the information the server needs in order to connect to the client via SSL, including the various cipher suites and maximum SSL version that it supports. The server responds with a ServerHello, which contains similar information required by the client, including a decision based on the client’s preferences about which cipher suite and version of SSL will be used [1]. 2. Certificate Exchange – When you request a HTTPS connection to a webpage, the website will initially send its SSL certificate to your browser. This certificate contains the public key needed to begin the secure session. Based on this initial exchange, your browser and the website then initiate the SSL handshake. The SSL handshake involves the generation of shared secrets to establish a uniquely secure connection between yourself and the website. When a trusted SSL Digital Certificate is used during a HTTPS connection, users will see a padlock icon in the browser address bar. When an Extended Validation Certificate is installed on a web site, the address bar will turn green. All communications sent over regular HTTP connections are in plain text and can be read by any hacker that manages to break into the connection between your browser and the website. This presents a clear danger if the communication is on an order form and includes your credit card details or social security number. With a HTTPS connection, all communications are securely encrypted. This means that even if somebody managed to break into the connection, they would not be able decrypt any of the data which passes between you and the website [2]. 3. Key Exchange – The encryption of the actual message data exchanged by the client and server will be done using a symmetric algorithm, the exact nature of which was already 150 agreed during the Hello phase. A symmetric algorithm uses a single key for both encryption and decryption, in contrast to asymmetric algorithms that require a public/private key pair. Both parties need to agree on this single, symmetric key, a process that is accomplished securely using asymmetric encryption and the server’s public/private keys. The client generates a random key to be used for the main, symmetric algorithm. It encrypts it using an algorithm also agreed upon during the Hello phase, and the server’s public key (found on its SSL certificate). It sends this encrypted key to the server, where it is decrypted using the server’s private key, and the interesting parts of the handshake are complete. The parties are happy that they are talking to the right person, and have secretly agreed on a key to symmetrically encrypt the data that they are about to send each other. HTTP requests can now be sent by forming a plaintext message and then encrypting and sending it. The other party is the only one who knows how to decrypt this message, and so cybercriminals are unable to read or modify any requests that they may intercept [1]. References: 1. How does HTTPS actually work? [Electronic resource]. – Mode of access: https://robertheaton.com/2014/03/27/how- does-https-actually-work/. – Date of access: 19.03.2018. 2. What is HTTPS? [Electronic resource]. – Mode of access: https://www.instantssl.com/ssl-certificate-products/https.html. – Date of access: 19.03.2018. 151 УДК 519.17:811.111 Poleshchuk E., Molchan O. 
Computer Graphics Belarusian National Technical University Minsk, Belarus The term Computer Graphics was coined in 1960 by William Fetter, a designer for Boeing. Computer graphics is the technology with which pictures in the general sense are generated or managed, displayed, and processed in an application-oriented manner by means of computers, and with which pictures are also correlated with non-graphical application data. The term computer graphics also implies the computer-aided integration and handling of these pictures synchronized with other data types. Today computer graphics is already the basic technology for visualization and for implementing interactive graphics dialogues for design and engineering applications (CAD, CAE, CAM, CIM, etc.), for printing, publishing, and office applications, for media and visual communication, for geographical information systems (GIS), and for architecture or civil engineering applications [1]. In a broader sense, computer graphics are any type of image created using a computer, and there is a vast range of image types a computer can create. Further advances in computing led to greater advancements in interactive computer graphics. In 1959, the TX-2 computer was developed at MIT's Lincoln Laboratory. The TX-2 integrated a number of new man-machine interfaces. A light pen could be used to draw sketches on the computer using Ivan Sutherland's revolutionary Sketchpad software [2]. Using a light pen, Sketchpad allowed one to draw simple shapes on the computer screen, save them and even recall them later. The light pen itself had a small photoelectric cell in its tip. This cell emitted an electronic pulse whenever it was placed in front of a computer screen and the screen's electron gun fired directly at it. By simply timing the electronic pulse with the current location of the electron gun, it was easy to pinpoint exactly where the pen was on the screen at any given moment. Once that was determined, the computer could then draw a cursor at that location. Sutherland seemed to find the perfect solution for many of the graphics problems he faced. Even today, many standards of computer graphics interfaces got their start with this early Sketchpad program. One example of this is in drawing constraints. If one wants to draw a square, for example, he or she does not have to worry about drawing four lines perfectly to form the edges of the box. One can simply specify that he wants to draw a box, and then specify the location and size of the box. The software will then construct a perfect box, with the right dimensions and at the right location. Another example is that Sutherland's software modeled objects – not just a picture of objects. In other words, with a model of a car, one could change the size of the tires without affecting the rest of the car, or stretch the body of the car without deforming the tires. All computer art is digital, but there are two very different ways of drawing digital images on a computer screen, known as raster and vector graphics. Simple computer graphic programs like Microsoft Paint and PaintShop Pro are based on raster graphics, while more sophisticated programs such as CorelDRAW, AutoCAD, and Adobe Illustrator use vector graphics. So what exactly is the difference? Raster graphics are digital images created or captured as a set of samples of a given space. A raster is a grid of x and y coordinates on a display space (and for three-dimensional images, a z coordinate).
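To make the grid idea concrete, the short sketch below builds a tiny raster image directly as a grid of pixel values and, for contrast, writes the same shape as a vector description. The image size, colours and file name are arbitrary, and the Pillow library is assumed to be installed.

from PIL import Image

# A raster image is literally a grid of pixel values: fill a 64 x 64 white
# grid and colour one pixel per column to draw a red diagonal line.
width, height = 64, 64
img = Image.new("RGB", (width, height), color=(255, 255, 255))
for x in range(width):
    img.putpixel((x, x), (255, 0, 0))
img.save("diagonal_raster.png")

# The equivalent vector description stores the line itself, not the pixels.
svg = ('<svg xmlns="http://www.w3.org/2000/svg" width="64" height="64">'
       '<line x1="0" y1="0" x2="63" y2="63" stroke="red"/></svg>')

Scaling the vector version only means rewriting coordinates, whereas scaling the raster means recomputing every pixel – the trade-off the text returns to below.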
A raster image file identifies which of these coordinates to illuminate in monochrome or color values. 153 The raster file is sometimes referred to as a bitmap because it contains information that is directly mapped to the display grid. There’s an alternative method of computer graphics that gets around the problems of raster graphics. Instead of building up a picture out of pixels, you draw it a bit like a child would by using simple straight and curved lines called vectors or basic shapes (circles, curves, triangles, and so on) known as primitives. Staring at the screen, a vector-graphic picture still seems to be drawn out of pixels, but now the pixels are precisely related to one another – they’re points along the various lines or other shapes you’ve drawn. Drawing with straight lines and curves instead of individual dots means you can produce an image more quickly and store it with less information. It’s also much easier to scale a vector-graphic image up and down by applying mathematical formulas called algorithms that transform the vectors from which your image is drawn. A raster file is usually larger than a vector graphics image file. A raster file is usually difficult to modify without loss of information, although there are software tools that can convert a raster file into a vector file for refinement and changes. Examples of raster image file types are: BMP, TIFF, GIF, and JPEG files. CG can be represented by 2D image or 3D model. 3D computer graphics is different from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing. 3D modeling is the process of preparing geometric data for 3D computer graphics, and is akin to sculpting or photography, whereas the art of 2D graphics is analogous to painting. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer graphics. 3D computer graphics in contrast to 2D computer graphics are graphics that use a three-dimensional 154 representation of geometric data that is stored in the computer for the purposes of performing calculations and rendering 2D images. In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D may use 2D rendering techniques [3]. Computer graphics is also divided into interactive and non-interactive. In non-interactive computer graphics otherwise known as passive computer graphics, the observer has no control over the image. Familiar examples of this type of computer graphics include the titles shown on TV and other forms of computer art. Interactive Computer Graphics involves a two way communication between computer and user. Interactive computer graphics affects our lives in a number of indirect ways. For example, it helps to train the pilots of our airplanes. We can create a flight simulator which may help the pilots to get trained not in a real aircraft but on the grounds at the control of the flight simulator. The flight simulator is a mock up of an aircraft flight deck, containing all the usual controls and surrounded by screens on which we have the projected computer generated views of the terrain visible on take off and landing. 
Flight simulators have many advantages over the real aircrafts for training purposes, including fuel savings, safety, and the ability to familiarize the trainee with a large number of the world’s airports. And that’s really the key point about computer graphics: they turn complex computer science into everyday art we can all grasp, instantly and intuitively. Virtually every modern computer now has what’s called a GUI (graphical user interface), which means you operate the machine by pointing at things you want, clicking on them with your mouse or your finger, or dragging them around your desktop. That’s why a picture really is worth a thousand words (sometimes many more) and why computers that help us visualize things with 155 computer graphics have truly revolutionized the way we see the world [4]. References: 1. Computer graphics [Electronic resource]. – Mode of access: http://www.newworldencyclopedia.org/entry/Computer_graphi cs. – Date of access: 13.03.2018. 2. Computer graphics [Electronic resource]. – Mode of access: http://www.explainthatstuff.com/computer-graphics.html. – Date of access: 14.03.2018. 3. Computer graphics [Electronic resource]. – Mode of access: http://ecomputernotes.com/computer-graphics/basic-of- computer- graphics/introduction-to-computer-graphics. – Date of access: 15.03.2018. 4. Computer graphics [Electronic resource]. – Mode of access: http://graphics.wikia.com/wiki/Computer_graphics. – Date of access: 14.03.2018. 156 УДК 004.896:811.111 Rosetskaya A., Murauyeva A. Artificial Intelligence Technology Belarusian National Technical University Minsk, Belarus Artificial Intelligence (AI) is a game-changing technology. It has the potential to transform the world. Companies are now significantly making investments in AI to boost their future businesses. Here are some examples of artificial intelligence. Roxxter Cleaning Robot. Roxxter is a powerful little helper with intelligent navigation software. The robotic vacuum cleaner scans its environment and creates its own map of the entire home. As well as displaying this information to the user in the Home Connect app, the map is also interactive. The RoomSelect function enables the user to select and activate individual rooms on the digital map. If certain areas are not to be cleaned, these can be marked as no-go zones on the map [1]. To achieve the best cleaning results, the motor is located directly on the brush and always provides a strong performance thanks to powerful lithium ion batteries. What’s more, this robot has an integrated camera. So, you always know what happens in your apartment or in case you have paranoia about whether you have closed the door or fed the dog, you can relax. And another feature is that Roxxter is currently the only robotic vacuum cleaner that can be started, stopped and even sent to specific rooms via Amazon’s cloud-based voice control service, Alexa. We go further: next robot is for entertainment. It’s called Anki’s Cozmo. Cozmo is a small, programmable robot with its own personality. It communicates with its eyes, 157 movements and sounds. Cozmo can recognize and remember people using a built-in OLED camera: just add in your name and let it stare at you for a few seconds. It'll get excited and say your name back to you in a childlike way. And the next time it sees you, it will recognize you [2]. Users can play with it through its app and interactive cubes that come in the pack. It also allows you to learn programming in a simple and friendly way. 
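As an illustration of how simple such programming can be, the fragment below is written in the style of Cozmo's publicly released Python SDK; the exact identifiers are quoted from memory and should be treated as an assumption rather than verified API.

import cozmo

# A minimal "hello world" for the robot: speak a phrase, then drive forward.
def greet(robot: cozmo.robot.Robot):
    robot.say_text("Hello, I am Cozmo").wait_for_completed()
    robot.drive_straight(cozmo.util.distance_mm(100),
                         cozmo.util.speed_mmps(50)).wait_for_completed()

cozmo.run_program(greet)   # connects to the robot and runs the routine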
In the app, available for Android, iOS and FireOS, you will find some games and skills that Cozmo will help you develop the more you play with it. The app also has a Code Lab that allows you to access the robot's functionality and program it with Scratch. The next robot will bring more pleasure to your life. It is Emotech Olly, a robot with an evolving personality. This is the result of the robot's deep-learning capabilities, which mean that as your Olly gets to know you and your daily routines over time, it will evolve to become more like you and respond to the patterns of your life [3]. What sets it apart is its ability to analyze vocal patterns and adjust accordingly. Olly will offer follow-up information and even suggestions based on recent interactions you have had with it, and it will learn to predict behaviors to do things like turning on your favorite song each morning as you are getting ready for the day [4]. AI in Biometrics. Biometrics is the measurement and statistical analysis of people's unique physical and behavioral characteristics. The technology is mainly used for identification and access control. There are many types of biometrics, and we would like to mention some of them. The first two types of biometric identification and recognition solutions are physical and behavioral biometrics. Physical biometric solutions use distinctive and measurable characteristics of particular parts of the human body, such as a person's face, iris, DNA, veins or fingerprints, and transform this information into a code understandable by the AI system. Behavioral biometric solutions operate in a similar way, except that they use unique behavioral characteristics, such as a person's typing rhythm, way of interacting with devices, gait, voice, etc. [5]. In smartphones, for example, the technology builds a personal profile from the size of a user's fingers, the pressure applied when tapping the screen, where fingers are placed on the screen, the swipe speed, how the device is moved and many other factors. Fingerprint Recognition. Fingerprint recognition is one of the most well-known biometrics, and it is by far the most widely used biometric solution for authentication on computerized systems. Most fingerprint biometric solutions look for specific features of a fingerprint, such as the ridge line patterns on the finger and the valleys between the ridges. In order to get a fingerprint match for verification or authorization, biometric systems must find a sufficient number of minutiae patterns; this number varies across systems. Voice Recognition software gives you the ability to streamline your workflow. Well-designed voice recognition software can help you dramatically increase productivity both at work and at home. You can dictate a document at roughly three times the speed of typing it, and with the right software, you can do so with even greater accuracy. What is more, customized voice commands allow for hands-free dictation: you can not only dictate text to the computer but also tell it to open and close a needed file. You can even send email using only your voice. The great feature of these programs is that they adapt to you: by learning the words and phrases you use the most, the programs get better at dictating your messages over time [6]. With this software, you do not even have to be in front of the computer screen to create documents. Voice recognition apps also eliminate the need to hold a phone. Busy parents who need to send emails while using their hands for other tasks can benefit from this software.
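The kind of scripted dictation described above can be tried with the open-source SpeechRecognition package; in the sketch below the audio file name is a placeholder, and recognize_google() sends the recording to Google's free web speech API, so it is meant only as a demonstration.

import speech_recognition as sr

# Transcribe a short recorded dictation from a WAV file.
recognizer = sr.Recognizer()
with sr.AudioFile("dictation.wav") as source:
    audio = recognizer.record(source)   # read the whole recording into memory

try:
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("The speech could not be understood")

Commercial dictation packages wrap the same idea in a polished interface.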
This software can also be a boon to people with disabilities or injuries that restrict keyboard and mouse use. A facial recognition biometric system identifies and verifies a person by extracting and comparing selected facial features from a digital image or a video frame to a face database. For example, an algorithm may analyze the distance between the eyes, the width of the nose, etc., and encode the corresponding data as face prints, which can then be used to find appropriate matches in a destination database [5]. References: 1. Mode of access: https://www.applianceretailer.com.au/2017/09/ifa-2017-bosch- debuts-first-robotic-vacuum/. – Date of access: 15.03.2018. 2. Mode of access: https://www.techradar.com/reviews/anki- cozmo. – Date of access: 15.03.2018. 3. Mode of access: https://www.cnet.com/products/emotech- olly/preview/. – Date of access: 20.03.2018. 4. Mode of access: https://www.digitaltrends.com/home/emotech-olly-speaker- with-personality/. – Date of access: 22.03.2018. 5. Mode of access: https://www.techemergence.com/ai-in- biometrics-current-business-applications/. – Date of access: 16.03.2018. 6. Mode of access: https://www.business.com/categories/best- voice-recognition-software/. – Date of access: 15.03.2018. 160 УДК 681.7.062:811.111 Stanilko M., Linkevich M., Murauyeva A. Interactive Mirrors Belarusian National Technical University Minsk, Belarus Mirror, mirror on the wall, who’s the smartest of them all? We all remember this fairy tale about the magic mirror. Maybe you even wanted to have one of your own. There’s no doubt that all girls dreamt about it. Now it’s not a dream: the thing that you take for granted – that piece of glass that you look into every day – is also getting smarter, just like everything else in your house. New modern mirrors are called interactive mirrors or smart mirrors. An interactive mirror consists of HD, Mirrorvision optical glass, plus intouch 6-point multi-touch. Interactive mirrors are a frameless solution, ready to integrate into both modern and traditional interiors. They are perfect for fitting rooms, exhibitions, branded merchandising themes, car showrooms, etc. Here we would like to introduce a few interesting examples of this technology. Hunting for clothes in a busy shop can be a nightmare, but shopping online can be a failure. This has led to a number of companies coming up with a compromise – interactive mirrors in shops that let you virtually try on different outfits, explore colours and patterns, and even order food. The latest to be rolled out in the US is the MemoryMirror (MemoMi) that uses augmented reality to show how clothes will fit, and lets shoppers change outfits with the swipe of a hand [1]. MemoMi uses Intel integrated graphics technology to create avatars of the shopper wearing various clothing. Using hand gestures, shoppers can scroll through different colours, patterns and sizes, and the smart mirror remembers previous outfit choices, so the shopper can compare 161 and contrast. ABYSS GLASS mirror, for example, will tell clients what are the must-have items of the season, what are the best offers and promotions [2]. The HiMirror acts as a daily beauty consultant: it actually uses its built-in smarts to tell you something different, like the condition of your skin, which can be used to build a daily skincare regimen, track results, and see improvement over time. 
HiMirror was developed under the supervision of professional consultants in the fields of dermatology, cosmetics (applications and raw materials), skincare, and medical beauty, making HiMirror a quality personal skincare consultant you can trust [3]. By taking a makeup-free photo with the HiMirror’s integrated high- resolution camera, the device’s proprietary technology analyzes dark spots, red spots, dark eye circles, wrinkles, pores, fine lines, and other complexion elements. From these, it creates a personalized Skin Index Synthesis report, which reports on skin firmness, brightness, texture, clarity, and overall healthiness [4]. HiMirror includes some of the standard smart mirror features, like displaying the local weather, syncing to your Google calendar or playing Spotify. It even includes facial and voice-recognition capabilities, which allows more than one family member to reap the benefits of automated skin analysis. Did we mention that there is a virtual makeup feature to preview how you’ll look with the makeup? The mirror knows what colours fit you best, from foundations to lipsticks. Afterwards, the mirror reviews the makeup products used to achieve the perfect makeup look. Beauty journey has never been this simple and exciting. It also guides you through beauty routines thanks to a series of tutorial videos. Even Snow White’s wicked stepmother wouldn’t have thought that one day the mirror could tell her such things! Competition gets more intense with the MirroCool, which promises a smart mirror incorporating facial and gesture recognition technology. It can recognize up to 70 distinctive 162 facial positions and use them to complete the task you name, without any taps, swipes or verbal instructions. This 60 centimetres by 80 centimetres smart mirror comes in a water- resistant bathroom version and another for the hallway in four colour choices. When switched off, it looks just like another mirror, but once activated it can sync up with your schedule on Google Calendar, iCloud or Office 365. Just stand in front of it and it gives you medical reminders, traffic reports, weather updates and calendar alerts. MirroCool is a security device as well. The integrated Face Recognition technology keeps track of every family member in the house, as well as people whose profiles you have provided. When it sees someone it does not know, it will alert you or sound the alarm. You can also make live checks of your home through the designated mobile app [5]. References: 1. Mode of access: http://www.dailymail.co.uk/sciencetech/article-2906563/The- end-fitting-room-queues-Smart-mirrors-lets-virtually-try- clothes-order-drinks.html. – Date of access: 15.03.2018. 2. Mode of access: https://abyssglass.com/pdf/AG_ENG.pdf. – Date of access: 15.03.2018. 3. Mode of access: https://www.himirror.com/eshop/us_en/product/himirrorrc. – Date of access: 22.03.2018. 4. Mode of access: https://www.digitaltrends.com/home/himirror-smart-mirror- announcement/. – Date of access: 20.03.2018. 5. Mode of access: http://www.scmp.com/native/lifestyle/topics/premier- living/article/2123750/you-talkin-me-smart-mirrors-do-talk- back-and. – Date of access: 20.03.2018. 163 УДК 629.113.001 Bulatovsky V., Pedko L. Four-Wheel Steering System Belarusian National Technical University Minsk, Belarus Four-wheel steering (4WS) is an advanced control technique which can improve steering characteristics. 
Compared with traditional two-wheel steering (2WS), four- wheel steering systems steer the front wheels and rear wheels individually when cornering, according to vehicle motion states: speed, yaw velocity and lateral acceleration [1]. Four- wheel steering can enhance handling stability, improve the active safety for a vehicle, and allow a vehicle to turn in a significantly smaller turning radius. When a vehicle enters a curved path, the rear wheels first steer in the opposite direction of the front wheels in order to generate sufficient yaw motion. Then, the rear wheels synchronize with the front wheel to keep the desired yaw rate value and to control the lateral motion for path tracking [2]. The lateral motion in the y-axis of an automotive vehicle is considered when analyzing steering systems. Lateral motion of the automotive vehicle implies how the vehicle responds to steering input. A human driver (HD) controls the lateral dynamics of a vehicle by indirectly affecting the forces generated by the wheels of the vehicle [3]. These forces are influenced by many systems, including the steering system of an automotive vehicle. The response of the automotive vehicle to steering input is predominantly influenced by a steer-by-wire (SBW) all- wheel-steered (AWS) conversion mechatronic control system. Conventionally, vehicle steering systems are used to control 164 the lateral motion of the vehicle [4]. Research and development (RD) on this subject is broken down along the following lines; RD work on active front-wheel steering (FWS), active rear- wheel steering (RWS) and all-wheel steering (AWS) systems [5]. Specifically, this publication focuses on the SBW four- wheel-steered (4WS) conversion mechatronic controller that influences the wheels direction in different modes, as shown in Figure 1. Figure 1 – Four-Wheel Steering System Four-wheel steering (4WS) systems control both front and rear steering angles as a function of driver input and vehicle dynamics. The front-wheel steering (FWS) controller alters the direction of the front wheels as a function of the drivers input with or without a mechanical link. Active FWS provides an electronically controlled superposition at an angle 165 to the steering wheel angle. Active FWS optimizes features such as steering comfort, effort, and steering dynamics. However, the rear-wheel steering (RWS) controller does not influence the front-steering angle (this task is left to the driver) but rather affects the vehicle dynamics by adjusting the steering angle of the rear wheels. For vehicles operating under normal operation circumstances, controlling lateral dynamics using a SBW 4WS conversion mechatronic control system is desirable; here the front and rear steering angles are the two control inputs. References: 1. Lohith, K. Development of four wheel steering for a car / K. Lohith, S.R. Shankapal, M.H. Monish Gowda. – Sastech Journal. – Volume 12. – Issue 1, April 2015. 2. Pushkin, G. Selectable all wheel steering for an ATV / G. Pushkin. – International Journal of Engineering Research & Technology (IJERT). – Vol. 4. – Issue 08. – August 2015. 3. Singh, A. Study of 4 wheel steering systems to reduce turning radius and increase stability / A. Singh, A. Kumar, R. Chaudhary, R.C. Singh. – International conference of advance research and innovation (ICARI-2014). 4. Singh, A. Mechanically actuated active four wheel steering system / A. Singh, A.K. Sharma, A. Singh, S. Alim. – International Journal of Advance Research in Science and Engineering. – Volume 5. 
– Issue 05. – May 2016. 5. RiyazHajaMohideen, S. Three mode steering system for light weight automobile vehicles / S. RiyazHajaMohideen. – International Journal of Science Research Engineering and Technology (IJSRET). – Volume 5. – Issue 3. – March 2016. 166 УДК 629.3:006.83.063 Savenkov A., Pedko L. The Procedure of Vehicle Certification in Belarus Belarusian National Technical University Minsk, Belarus Certification is the action of a third party certifying that a properly identified product, process or service with a certain degree of certainty meets the requirements of the relevant VAT. The certification of vehicles in Belarus takes place in accordance with the Technical Regulations of the Customs Union «On the Safety of Wheeled Vehicles» (TRCU 018/2011) [1]. The main bodies performing the assessment of the conformity of vehicles in the structure of the Unified Institute of Mechanical Engineering of the National Academy of Sciences of Belarus are the following:  body for certification of vehicles, objects of their equipment and parts, control systems «AKADEM-SERT» – Research and development center «Certification of mobile machines»;  research and development center «Republican testing ground for mobile vehicles». The objectives of technical regulations in relation to vehicles are the following:  safety requirements for wheeled vehicles of categories M, N, O, L intended for operation primarily on public roads in order to protect human life and health, property, as well as to prevent actions that mislead consumers (users) regarding their purpose and safety;  requirements for vehicle safety, similar to those adopted in the developed countries; 167  implementation of international agreements to which Belarus is a party (Geneva Agreement of 1958) [2]. List of the main documents required for the certification of vehicles:  a general technical description of the type of a vehicle;  available at the filing date of the application, evidence supporting the compliance of products with the requirements of this technical regulation;  certificates of conformity;  protocols of vehicle certification tests in relation to individual requirements;  in the case of special and specialized vehicles, a certificate of vehicle identification and certification tests issued by the accredited testing laboratory regarding the applicable requirements;  in the case of a chassis, protocols of certification tests for individual requirements;  reports on type approval in respect of the UNECE Regulations, provided in the countries of the 1958 Agreement; Certification body «AKADEM-SERT» conducts a confirmation of conformity of the product:  vehicles of category L, M, N, O,  construction and road construction machinery,  items of equipment and spare parts of wheeled vehicles,  bicycles,  bench-mounting tool. The cost of certification depends on the labor hour and is about 250-750 $, depending on the work to be performed. For the period of the work the Certification Body issued:  6300 certificates of conformity for spare parts, vehicles;  5230 approvals of vehicle types;  957 messages concerning the official type approval under the UNECE Regulations; 168  120 certificates of conformity of quality management systems for compliance with the requirements of the standard STB ISO 9001 [3]. To implement the provisions of TRCU, training is required from manufacturers of vehicles, certification bodies, testing laboratories. 
Certain provisions of the regulations will lead to additional administrative and technical difficulties and will increase the time and cost of homologation.
References: 1. The adopted technical regulations of the customs union (EEU) [Electronic resource]. – Mode of access: http://gosstandart.gov.by/approved-technical-regulations-of-the-customs-union-(eeu)/. – Date of access: 10.03.2018. 2. Procedure of product certification – main steps [Electronic resource]. – Mode of access: https://standartno.by/information/protsedura-sertifikatsii-produktsii/#itc_widget. – Date of access: 10.03.2018. 3. Certificate of Conformity of the National System for Conformity Assessment [Electronic resource]. – Mode of access: https://oim.by/ru/akadem-sert/organ-po-sertifikatsii-produktsii-i-uslug.html. – Date of access: 10.03.2018.
УДК 625.87:678.5
Savenkov A., Pedko L.
Eternal Roads of the Future. Plastic Roads
Belarusian National Technical University
Minsk, Belarus
Everybody knows that plastics take at least 150 years to decompose. However, the recycling of plastic products is poorly developed in many countries, which is extremely detrimental to the environment, and the amount of waste plastic is not getting any smaller. At the same time, in our country asphalt deteriorates in a matter of months, which adversely affects car suspensions and the size of the state budget allocated annually for road repairs. In some cases it also contributes to accidents: after all, bad roads are a cause of serious crashes. The production of asphalt pavement and its components is not in itself environmentally friendly. According to experts, CO2 emissions from asphalt production are 1.6 million tons per year, equivalent to two percent of the total carbon dioxide emissions from the automotive industry. Scientists all over the world are trying to solve these problems. The result of their work and creative thinking may be an innovation that, together with several technological solutions, can help address several problems of our time. The large Dutch road construction company KWS Infra, together with specialists from VolkerWessels, is preparing to launch the first project of its kind in the world: long-lasting roads assembled from special plastic modules [1].
As the developers Anne Kudstaal and Simon Yorritsma note, the contract, which is signed by the companies KWS, Wavin and Total, will contribute to the construction, combining their experience, technical capabilities, knowledge and resources for the implementation of this innovative project. In general, these modules are able to withstand the same load as asphalt, but PlasticRoad has many advantages, compared to the usual road surface made of a mixture of stone, sand, asphalt and bitumen. The main advantages of this technology include:  Plastic road for its financial costs in production and assembling, will cost many times cheaper than conventional highways, which will save a lot of money.  Due to their low weight, the modules are easy to transport and assemble, and the soil is much less prone to 171 subsidence. If necessary, the road can be easily dismantled and installed elsewhere.  Mounting the slabs can be done on an aligned sand platform, and they are fit for laying down on almost any type of soil.  In case of an unexpected damage to the module, it can simply be replaced with a new one in the designer.  When the service life of the modules comes to an end, they can be recycled again to produce new modules.  The construction of these plates provides space for various communications, in particular, electrical and telephone cables, sewage and water pipes, gas distribution networks, other pipelines, drainage for sewage, etc.  In Europe, roads are calculated and built for 20-25 years. The lifetime of the road from plastic modules is 2-3 times greater than that of a road with a classic road surface.  For manufacturing PlasticRoad, recycled plastic waste is used, which will reduce environmental pollution.  These plastic modules are capable of withstanding temperatures from -40 to +80 °C, they are resistant to damage, wear, mechanical abrasion and corrosion. Withstanding temperature changes and stress, they will not show ruts and cracks from heavy transport, as it happens with our asphalt in hot weather. Plus plastic coating is not difficult to maintain in proper condition.  Roads from the new material due to their convenient form in the form of a designer will be erected in a few weeks, not months compared to the classical, which is much faster.  The proposed road design will lead to a significant reduction in emissions of carbon dioxide into the atmosphere, rather than in the production of asphalt.  When the vehicle is traveling on a plastic roadway, the wheels will produce less sound. And due to the special pattern 172 of the upper part of the mold, it is possible to increase the coefficient of friction of the surface of plastic modules with vehicle tires. It is also possible to use special anti-skid coatings on the surface. In the long term, this technological solution can help to find application for all those billions of tons of plastic debris, improve not only our roads, but also their erection, cutting costs. And also increase the safety and informative content on the roadways of communication [4]. References: 1. Plastic Road: A revolution in building roads [Electronic resource]. – Mode of access: https://www.plasticroad.eu/en/. – Date of access: 12.03.2018. 2. Plastic bottles and bags recycled to build roads [Electronic resource]. – Mode of access: https://news.sky.com/story/amp/plastic-bottles-and-bags- recycled-to-build-roads-11101612. – Date of access: 12.03.2018. 3. An Engineer Has Found a Way to Create Plastic Roads [Electronic resource]. 
– Mode of access: https://futurism.com/an-engineer-has-found-a-way-to-create-plastic-roads/amp/. – Date of access: 12.03.2018. 4. Plastic roads surface in the UK [Electronic resource]. – Mode of access: https://www.zdnet.com/article/plastic-roads-surface-in-the-uk/. – Date of access: 12.03.2018.
УДК 656.073.235
Nemchenko A., Pedko L.
Container Lift System
Belarusian National Technical University
Minsk, Belarus
The handling of containers does not pose a particular problem when suitable infrastructure such as cranes, straddle carriers, reach-stackers or large forklifts is available, for example in hubs such as train yards, container terminals and large distribution centers. However, this heavy-duty equipment is typically capital intensive and is not always suited, or able to be efficiently transported, to the many locations at which containers are packed, unpacked or otherwise handled. Various mobile equipment is used to facilitate container transport to, and handling at, these locations. Mainstream examples of this equipment include:
 specialised self-loading container trailers
 truck cranes
 tilt bed or tilt deck trailers
All of the heavy-duty container handling equipment described above suffers from a variety of limitations, which may include:
 high cost
 lack of portability
 high tare weight
 inability to handle heavy containers
 inability to handle all container types
 requirement of a high or wide space in which to operate
 requirement of a concrete or other reinforced surface on which to operate
Recognising that conventional container handling equipment is typically big, heavy and expensive, New Zealand-based BISON has introduced a compact, portable and more economical alternative aimed at extending the benefits of intermodal logistics to new frontiers. A portable container lift system includes a hydraulic linear actuator and a mounting arrangement for attaching the actuator to a shipping container. The P32 system consists of a number of portable components that can be handled by a single worker (Figure 1). The BISON P-32 is easily transported between sites, sets up in minutes and allows containers of all sizes and weights up to 32 tons (70,000 lb) to be lifted on and off trailers safely and efficiently. In its simplest form, the P32 can attach to the four corners of a container and lift it a small height off the ground, enabling the weight at each corner of the container to be measured by sensors. The convenience and efficiency provided by the use of ISO standardized containers for freight handling has led to their ubiquitous use throughout the world, on ocean, railroad and road.
Figure 1 – A shipping container lift system mounted to a shipping container located on a truck-trailer
The system provides specialized lifting legs, which are attached to standard features of containers, such as the corner fittings of ISO containers, and enable the container to be raised vertically to a height that allows a truck-trailer to be positioned under, or removed from under, the container. The portable lift may also incorporate a weighing system, so that the same hydraulic actuators and mountings provide a means of weighing the container. The weight of containers and containerized freight can thus be measured using industrial weighing equipment or the P32 lift system [1].
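The corner-weighing idea is easy to picture in code. The sketch below is hypothetical: it simply sums four invented corner load-cell readings, reports the gross weight and checks it against the 32-ton rating quoted above.

# Hypothetical corner-weighing logic: one load reading per ISO corner fitting.
MAX_LIFT_KG = 32_000   # the P32's quoted 32 t capacity

def gross_weight(corner_loads_kg):
    """Total container weight from the four corner load-cell readings."""
    if len(corner_loads_kg) != 4:
        raise ValueError("expected one reading per corner fitting")
    return sum(corner_loads_kg)

readings = [5_850, 6_120, 5_940, 6_010]   # kg at each corner (example values)
total = gross_weight(readings)
print(f"Gross weight: {total} kg")
print("Within P32 rating" if total <= MAX_LIFT_KG else "Overweight - do not lift")

Comparing the four readings against each other also shows whether the load inside the container is unevenly distributed.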
A key part of the P32 design is BISON's patent-pending lift-and-lock mechanism, which reduces the size of the hydraulic system considerably but still enables heavy containers to be elevated 1.65 meters (65 inches) off the ground. This in turn reduces the size, weight and cost of the system. Importers and exporters can lift and ground containers more economically in factories or warehouses. Military, aid and project logistics operators can use the P32 to get containers in and out of remote locations more easily, avoiding reliance on local infrastructure.
References: 1. Mark, J.F. Container lift and/or weighing system / J.F. Mark, H.M. Carsten. – Patent No. WO 2015/026246 A2.
УДК 811.111:621.444.4
Svirski R., Piskun O.
Electric Car
Belarusian National Technical University
Minsk, Belarus
An electric car is a type of alternative fuel car that uses electric motors and motor controllers instead of an internal combustion engine. The electric power is usually derived from battery packs in the vehicle. The electric car is a relatively new concept in the world of the automotive industry. Although some companies have based their entire model range on electric propulsion, others also offer hybrid vehicles that run on both electricity and gasoline. The electric motor has many advantages:
 it is safe to drive;
 it does not pollute the air;
 it requires little maintenance;
 it can be fueled at a very low cost;
 and many others.
But not everything is as perfect as it seems. These cars also have many shortcomings:
 scarce recharge points;
 electricity is not free;
 short driving range and speed;
 battery replacement (every 3-10 years);
 not suitable for cities facing a shortage of power.
Let us turn to history. The electric car was successful in the early 1900s. Women liked electric cars because they were quiet and, what was more important, they did not pollute the air. Electric cars were also easier to start than gasoline-powered ones. But the latter were faster, and in the 1920s they became much more popular. The electric car was not revived until the 1970s, when there were serious problems with the availability of oil. The General Motors Co. had plans to develop an electric car by 1980; however, oil soon became available again, and this car was never produced [1]. The future for electric cars looks to be a bright one. This is because of California's zero-emissions policy, which has been adopted by several other states. The nation's electric vehicle population is due to explode by the end of the decade. According to a study by the coalition, 65,364 new electric vehicles were available for sale in 2000 in California, Maine, Maryland, Massachusetts, New Jersey, and New York. The annual total of new electric vehicles in those states is projected to rise to 700,000 in 2016 and to 1.2 million in 2017. Experts are also looking for alternative power sources to batteries. Some experts feel hydrogen fuel cells will become the dominant motor vehicle power source. The fuel cells convert hydrogen (an element in virtually limitless supply) directly into electricity without burning it to produce heat. Vehicles powered by hydrogen will be three times as energy-efficient as gasoline-burning internal combustion engines. These cars will also be squeaky-clean because hydrogen-powered vehicles emit only water vapor as exhaust [2]. Another alternative power source contains thin sheets of plastic called proton-exchange membranes (PEMs).
These separate hydrogen ions from electrons during operation. This type of battery seems to be the best suited for motor vehicle travel. This battery could yield a fuel cell that is light, compact and inexpensive to produce on a mass basis. Sam Romano, project manager of the fuel-cell program at Georgetown says PEM technology is «perhaps 10 to 12 years away from broad commercial application» [2]. 178 In all the motor vehicle market of the future is likely to feature several different fueling systems. There’s going to be a role for all of the technologies. Electric vehicles, in terms of light-duty trucks, cars and vans, make a great deal of sense. But for heavy-duty trucks, the battery technology just isn’t there at all. Consequently, despite the environmental advantages of electric vehicles, other alternative fuel technologies will remain on the scene – and even dominate certain vehicle markets [2]. References: 1. Mode of access: https://studopedia.su/13_174330_To-be- read-after-Lesson-.html. – Date of access: 01.04.2018. 2. Mode of access: http://www.bestreferat.ru/referat- 292463.html. – Date of access: 04.04.2018. 179 УДК 338.124.4 Shulga D., Yakubovich A., Piskun O. Tulip Mania: When Tulips Cost as Much as Houses Belarusian National Technical University Minsk, Belarus Crypto currencies and especially Bitcoin are the talk of the town of late. According to Consumer News and Business Channel, the price of a single bitcoin has gone up at a faster pace than any other speculative vehicle in market history, as investor enthusiasm for the new medium has reached a fever pitch. Some have likened the Bitcoin craze to Tulip Mania, believing that the bubble is getting ready to burst. Aside from the Bitcoin bubble, there have been a lot of economic bubbles and subsequent crashes over the years such as, the dot com bubble, the stock-market bubble, the real-estate bubble, but one you may have never heard of is the Tulip Bulb Market Bubble of 17th century Netherlands. Tulip mania is a perfect example of a cautionary tale of price speculation in what is widely regarded as the first recorded financial bubble and crash of all time [1]. The Roots of Tulip Mania So, what is the story with the tulip mania? Well, as some may be aware, the tulip is a national symbol of the Netherlands. The country is affectionately known by some as the flower shop of the world. The Dutch people even took their love of tulips abroad when emigrating from their homeland, starting up tulip festivals in places like New York and in the town aptly named Holland located in the U.S. state of Michigan. Despite this near obsession with tulips, the flower is not native to the Netherlands. They are actually native to the Pamir 180 and Tan Shan mountain ranges. They were brought to the Netherlands in the late-16th century from the Ottoman Empire. A botanist by the name of Carolus Clusius who in the 1590s had begun an important botanical garden at the University of Leiden, was one of the first to really pioneer the cultivation of tulips in the Netherlands. He had his own private garden in which he planted numerous bright and beautiful tulips and devoted much of his later life to studying the tulip and the mysterious phenomenon known as tulip breaking. Tulip breaking is key to the story of the tulip mania. It was a strange occurrence in which the petal colors of the flower suddenly changed into multicolored patterns. Many years later it turned out that these strange looking tulips were actually the result of a virus that had infected them. 
Nonetheless, these essentially diseased multicolored tulips did nothing but serve to ramp up the tulip craze further [1]. The mesmerizing diseased tulips became even more valuable than the uninfected ones and Dutch botanists began to compete with each other to cultivate new hybrid and more beautiful varieties of tulips. These became known as cultivars and would be traded among a small group of botanists and other flower aficionados. As time passed, the trade grew out from the group and botanists began to receive requests from people they did not know for not only the flowers, but the bulbs and seeds in exchange for money. Part of what helped this interest in Tulips grow, along with people’s willingness to exchange money for them, was the fact that the Netherlands in the early part of the 1600s had become the richest country in Europe mostly through trade. During this Dutch Golden Age, not only were there aristocrats with money, but middle-class merchants, artisans and tradesmen also found themselves with extra coin burning a hole in their pockets. Basically, this meant more people were able to spend money on luxuries such as cultivars that perhaps 181 in other European countries would not have been commonplace. Moreover, the Netherlands and specifically Amsterdam already had robust trading platforms. The Amsterdam Stock Exchange opened in 1602 and the Baltic Grain Trade, an informal futures market itself, had begun decades earlier. The Netherlands was therefore primed for a new trade, which was to become Tulip Mania [1]. The Bubble By the 1620s, prices were already rising to incredible levels. One story in particular was of an entire townhouse offered in exchange for just 10 bulbs of the very special cultivar, Semper Augustus, that had petals that looked a bit like a candy cane. That was only to be the crescendo, however, as the climax of tulip mania took place in Alkmaar at an auction shortly thereafter where cultivars Admirael van Enchuysen sold for 4,230 florins and 5,200 florins, respectively. By the height of the tulip and bulb craze in 1637, everyone had got involved in the trade, rich and poor, aristocrats and plebes, even children had joined the party. Much of the trading was being done in bar rooms where alcohol was obviously involved. According to some reports, bulbs could change hands upwards of 10 times in one day. Prices skyrocketed at one point in 1637, increasing 1,100% in a month. In just over a month from 31 December 1636 to 3 February 1637, Switsers, a particularly popular bulb saw its price rise from 125 florins to 1500 florins [2]. The Burst As is often the case with economic bubbles, as the price rose to a point where it was obviously so incredibly inflated, some prudent people decided to get out and capitalize on the absurd prices. Then a domino effect took place where more and more tried to sell at ever decreasing prices. The truth is that no 182 one is completely sure what lead to the cataclysmic demise of the bulb trade, but what is certain was that it caused unmitigated pandemonium and widespread panic throughout the republic. This is when parties involved began to stop honoring contracts. Needless to say, this was cause for much hubbub, as people realized they had bet their whole life savings or family homes on these tulip bulbs. 
The Dutch government even had to intervene to try to curb the fall, offering to honor contracts at 10% of the face value, however, this only worsened proceedings, as the price began to fall even farther until the bottom completely fell out. Of course, this resulted in financial ruin for many, as the bulbs that they had paid so highly for were worth virtually nothing. Debt disputes went on for years and even those that were lucky enough to get out early were hurt later by the depression in the aftermath of the crash. The Dutch government passed the buck by making a feeble proclamation that the debts were to be settled by local city magistrates. Eventually the majority of the contracts were cancelled [2]. References: 1. Mode of access: https://www.focus-economics.com/blog/tulip-mania-dutch- market-bubble. – Date of access: 19.03.2018. 2. Mode of access: https://projectauthenticity.org/2017/11/29/a-story-of-tulips- and-bitcoin/. – Date of access: 19.03.2018. 183 УДК 355.11 Motorin R., Pigulsky M., Piskun O. Russian Soldier of the Future Belarusian national technical university Minsk, Belarus Ratnik is a Russian future infantry combat system. It is designed to improve the connectivity and combat effectiveness of combat personnel in the Russian Armed Forces. Today there are three versions of the development of this equipment. The third is the newest one, which includes an exoskeleton. The Ratnik outfit is comprised of more than 40 components, including firearms, modernised body armour, a helmet with a special eye monitor (thermal, night vision monocular, flashlight), communication systems, and special headphones, an optical array, communication and navigation devices, as well as life support and power supply systems. The Ratnik - 2 outfit adds significantly to the soldier’s combat efficiency and survivability, not least because it’s lighter: at 20 kilos, it weighs only half as much as its predecessor [1]. As for the third generation Ratnik infantry combat kit, it features an array of unique integrated biomechanical tools, including exoskeletal elements; it features built-in microclimate support and a health monitoring system. The total weight of the kit is up to 22 kg in the expanded configuration (without combat stock and weapons). In general, 90 percent of the body surface of the serviceman is protected. The flak jacket has several varieties, from light to heavy with insert plates. The design assumes continuous wearing for a minimum of 48 hours. Transmission of video information from the sight to the eye indicator is carried out in wireless mode. The communication system will allow the soldier to communicate with the command and his 184 colleagues at a tactical level. Saturation with electronics makes the soldier a single combat system, controlled by the latest technologies. At the same time, information about the location of the serviceman is transferred to the command post, which greatly reduces the probability of loss without a trace. Modern lightweight 6B47 helmet Modern lightweight 6B47 helmet of the Armed Forces of the Russian Federation with Night Vision with camera and side rail, made for Ratnik combat gear. Helmet weight is 1kg, protects from Makarov pistol shots over 5 meters distance. Even with all of its defensive attributes, the helmet weighs less than its American counterpart, which is smaller but weight of this American counterpart 1.5 kilograms. The basic variant of 6B45 has a weight of approximately 8 kg. 
Control system Sagittarius Control system Sagittarius includes communication facilities, target designation, processing and display of information, identification allowing the transfer to the command post information about the whereabouts of the soldier, communicator that determines the coordinates of the serviceman with the help of GLONASS and GPS for solving the problem of orientation on the terrain and target designation and other applied calculations. The camouflage pattern of the Ratnik The camouflage pattern of the Ratnik field uniform makes the soldiers less visible to infrared cameras. The uniforms of reinforced-fiber fabric of polymeric compounds protect the soldier against open fire and minor splinters/ballistic shrapnel, while the body armor vest, reinforced by ceramic and hybrid inserts, is effective against small arms, including armor-piercing bullets preventing bullet penetration and trauma. The Ratnik uniform is fitted with special sensors that are designed to transmit information to military medics about a soldier’s physical state. Specially 185 designed sensors will continually record heart rate, respiratory rate, blood-oxygen saturation indicators and microvascular blood filling. The system will store and analyse this data and any deviation from the norm will trigger an alarm in the medical service. All information is automatically saved on a flash drive that stores medical history. Soldiers in medical units will have access to information about the condition of the wounded and their GPS coordinates. Based on the severity of the injuries, the state of a wounded soldier will be assessed on a scale of 0 to 5 [1]. This will help prioritize evacuation of the wounded and identify the best way of reaching them. Russian engineers have unveiled a unique thermal weapon sight for the Ratnik (Warrior) combat gear of the future. The tests of prototype Russian made thermal weapon sights visualizes for the user to see enemy soldiers in pitch darkness or in smoke on the battlefield. The system sensors can discriminate between objects even when the temperatures differ by one tenth of a degree. One cannot see camouflaged soldiers standing behind foliage with conventional night sights because they are blending with the terrain, but thermal imagers detect body heat. The new thermal sight becomes part of the Ratnik future soldier system and can detect enemy forces at ranges up to 1,200 meters. The gun sight is synchronized with a special helmet mounted eyepiece display. The soldier can put the rifle behind the corner by attaching the gun’s sight to the rifle. The soldier will see everything around the corner in real time while remaining safe. Every thermal sight undergoes a number of tests including heat tests inside special compartments that simulate temperature fluctuation between minus 50 to plus 70 degrees Celsius as well as tests to see how they react to vibrations and impact [2]. The structure of the Ratnik includes several other elements of weapons, such as: protective glasses 6B50, protecting the eyes and part of the face of the serviceman from 186 the fragments of ammunition; water treatment filters, autonomous heat sources; additional sights for weapons equipped with night vision and a thermal imaging aiming system; video module for shooting from the shelter. 
It consists of a thermal imaging sight and a helmet monitor with a control system, which displays an image from the sight; active headphones that allow you to communicate during a fight; sensors of the identification system for military vehicles and soldiers on the principle of their own-alien. To distinguish own from alien a serviceman equipped with such a sensor can, looking at the screen of a special device that looks like a mobile phone. It displays on the electronic map the location of the soldier and the location of friendly forces at a given time. Last version of Ratnik Russian scientists and engineers have begun to create combat equipment of the third generation. This is Ratnik-3. We know that Ratnik-3 includes a titanium exoskeleton that will increase physical strength and endurance, a flak body armor, a camouflage uniform that can be adjusted to weather conditions, an armored helmet with a flashlight, a display and a night vision device, as well as shoes with explosive sensors. References: 1. Mode of access: https://www.armyrecognition.com/russia_russian_military_fiel d_equipment/ratnik_future_soldier_individual_soldier_combat _gear_system_technical_data_sheet_specifications_pictures_vi deo_12205165.html. – Date of access: 20.03.2018. 2. Mode of access: http://www.sadefensejournal.com/wp/?p=3224. – Date of access: 20.03.2018. 187 УДК 623.454.362+811.111 Cherkashin N., Nesterovich R., Piskun O. Humanitarian Demining Belarusian National Technical University Minsk, Belarus Demining or mine clearance is the process of removing either land mines, or naval mines, from an area, while minesweeping describes the act of detecting of mines. There are two distinct types of mine detection and removal: military and humanitarian. Humanitarian demining, a core component of mine action, covers the range of activities which lead to the removal of mines and unexploded ordnance hazards. These include technical survey, mapping, clearance, marking, post-clearance documentation, community mine action liaison and the handover of cleared land. In general, humanitarian demining is regarded as a short-run emergency mine clearance of land with 100 percent efficiency. Humanitarian demining differs from military mine clearance mainly in its purpose. The purpose of humanitarian demining is to clear the land from mines and other explosive remnants to return to the end users, whereas military mine clearance is intended to open a passage for troops. Therefore, the military may breach a path through a minefield without destroying every single mine in the path. Demining for humanitarian purposes is slow due to its 100 percent clearance requirement, and it is dangerous because a simple mistake can cost the lives of the operators. In some situations, clearing landmines is a necessary condition before other humanitarian programs can be implemented. A large scale international effort has been made to test and evaluate existing and new technologies for 188 humanitarian demining, notably by the EU, US, Canadian and Japanese governments and by the Mine Action Centers of affected countries. Humanitarian demining programs are often aimed at quickly safeguarding people living with the threat of landmines. Peacekeeping forces need safe movement to carry out their activities. Additionally, food, medicine, temporary shelter, or some emergency materials may need to be delivered to those who need it. When such activities are obstructed by the presence of landmines, a humanitarian demining is imperative. 
Demining activity can be limited to opening access roads, clearing residential areas, creating temporary relocation places, and the like. Demining to allow such emergency assistance can be acceptable; however, it should only be for a short period of time. If it goes beyond a short period of time or demining is no longer for emergency purposes, then there must be a justification for its value. When demining for such purposes exceeds the emergency need, it is difficult to defend its cost especially in countries where they have other humanitarian needs. Therefore, demining for humanitarian purposes should not last a very long time. Otherwise, demining for humanitarian purposes will not justify the cost. In an emergency situation the cost of demining can be defended. For example, when people need to return home and if access is not provided, people will either die or be restrained from returning. When many people die demining can be justified because the benefit from demining can be proven against the cost of many people’s lives. Moreover, when people are restrained from returning they need to be supplied with all their needs. To supply human needs forever is very costly, and thus demining for the return of displaced people is beneficial. In the absence of access to roads due to mines to a community who needs emergency aid, demining again justifies 189 its cost because aid will have to be delivered by other means such as helicopters or planes, which is more expensive than road transportation. However, when road access is provided through demining and people are returned back to their homes, they will still need to build their daily lives. This can be through using their farmlands, breeding cattle, using water wells, developing a power supply, going to school, and rebuilding their residential areas or any other daily activities. In such situations, the cost of demining needs to be calculated in comparison to its benefits. The decision makers should show that demining activities to provide such access to the community have a benefit greater than the associated cost. Every plan of the demining activity should be linked to promotion of the development of the community. If demining is not linked to development it will be difficult to justify it for only humanitarian purposes. The prioritization of demining in terms of the outcome of the land to be cleared should be calculated against the cost and set in place before any demining activity. If one cannot do this, resources will be wasted because the short-run humanitarian need will change to a development requirement and it will be hard to justify the cost in relation to the benefits. Therefore, after emergency needs are resolved, the next steps for demining should be conducted based on a cost-benefit analysis. 190 УДК 811.111:629.351(1-87) Shevcov N., Buk I., Piskun O. The Development of Military Engineering Belarusian National Technical University Minsk, Belarus Military engineering is that engineer activity undertaken, regardless of component or service, to shape the physical operating environment. Military engineering incorporates support to maneuver and to the force as a whole, including military engineering functions such as engineer support to force protection, counter-improvised explosive devices, environmental protection, engineer intelligence and military search [1]. 
The first civilizations to have a dedicated force of military engineering specialists were the Romans, whose army contained a dedicated corps of military engineers known as architects. This group was preeminent among its contemporaries. The scale of certain military engineering feats, such as the construction of a double-wall of fortifications 30 miles (48 km) long, in just 6 weeks to completely encircle the besieged city of Alesia in 52 B.C.E., is an example. Such military engineering feats would have been completely new, and probably bewildering and demoralizing, to the Gallic defenders. In ancient times, military engineers were responsible for siege warfare and building field fortifications, temporary camps and roads. The most notable engineers of ancient times were the Romans and Chinese, who constructed huge siege-machines (catapults, battering rams and siege towers). The Romans were responsible for constructing fortified wooden camps and paved roads for 191 their legions. Many of these Roman roads are still in use today [2]. Military engineers planned castles and fortresses. When laying siege, they planned and oversaw efforts to penetrate castle defenses. When castles served a military purpose, one of the tasks of the sappers was to weaken the bases of walls to enable them to be breached before means of thwarting these activities were devised. With the 14th-century development of gunpowder, new siege engines in the form of cannons appeared. In England, the challenge of managing the new technology resulted in the creation of the Office of Ordnance around 1370 in order to administer the cannons, armaments and castles of the kingdom. Both military engineers and artillery formed the body of this organization and served together until the office's predecessor, the Board of Ordnance was disbanded in 1855. By the 18th century, regiments of foot (infantry) in the British, French, Prussian and other armies included pioneer detachments. In peacetime these specialists constituted the regimental tradesmen, constructing and repairing buildings, transport wagons, etc. On active service they moved at the head of marching columns with axes, shovels, and pickaxes, clearing obstacles or building bridges to enable the main body of the regiment to move through difficult terrain. The modern Royal Welch Fusiliers and French Foreign Legion still maintain pioneer sections who march at the front of ceremonial parades, carrying chromium-plated tools intended for show only. The dawn of the internal combustion engine marked the beginning of a significant change in military engineering. With the arrival of the automobile at the end of the 19th century and heavier than air flight at the start of the 20th century, military engineers assumed a major new role in supporting the movement and deployment of these systems in war. Military engineers gained vast knowledge and experience in explosives. 192 They were tasked with planting bombs, landmines and dynamite [2]. At the end of World War I, the standoff on the Western Front caused the Imperial German Army to gather experienced and particularly skilled soldiers to form Assault Teams which would break through the Allied trenches. With enhanced training and special weapons (such as flamethrowers), these squads achieved some success. In early WWII, however, the Wehrmacht Pioneer battalions proved their efficiency in both attack and defense, somewhat inspiring other armies to develop their own combat engineers battalions. 
Notably, the attack on Fort Eben-Emael in Belgium was conducted by Luftwaffe glider-deployed combat engineers. The need to defeat the German defensive positions of the Atlantic wall as part of the amphibious landings in Normandy in 1944 led to the development of specialist combat engineer vehicles. These, collectively known as Hobart's Funnies, included a specific vehicle to carry combat engineers, the Churchill AVRE. These and other dedicated assault vehicles were organized into the specialized 79th Armored Division and deployed during Operation Overlord – D-Day. Engineer troops developed significantly in the Russian Army during the Seven Years’ War of 1756-1763, which demanded engineer preparation for sieges of strong fortresses (Kolberg and others), troop crossings of the Neman and Vistula, and other work. In 1802 the engineering department was formed. In the early 19th century engineer troops consisted of engineer and pontoon regiments (six to ten companies each). In 1816 battalion organization of engineer troops was instituted, with one engineer or one sapper battalion for each corps. The Soviet engineer troops were created when the Red Army was organized. According to the 1918 table of organization, divisions were to have an engineer battalion 193 (1,263 men), rifle brigades were to have a sapper company (361 men), and rifle regiments were to have a sapper team (60 men). In 1919 special engineer units (pontoon and electrical engineer battalions and detached camouflage companies) were formed. During the Civil War more than 100 soldiers from engineer units were awarded the Order of the Red Banner for heroism. The engineer troops were led by the inspector of engineers at the Field Headquarters of the Republic (A. P. Shoshin from 1918 until the end of 1921), by the chief engineers of fronts and armies, and by division engineers. In 1941 the engineer troops consisted of troop, army, and district units; in addition, the Reserve of the Supreme Command had two battalions and one company of engineer troops. In early 1941 the district and army engineer units were reorganized into engineer and pontoon regiments. Early in the Great Patriotic War of 1941–45 (October 1941) combat engineer armies were formed to carry on engineer preparation of defensive lines (by January 1942 there were ten armies). In February 1942five of the combat engineer armies were inactivated, and the others were made subordinate to fronts and later also abolished. From 1942 the basic organizational form of engineer troops in the Reserve of the Supreme Command became engineer brigades; in 1944 they were included in the composition of the fronts and armies [2]. References: 1. Mode of access: https://everipedia.org/wiki/Military_engineering/. – Date of access: 23.03.2018. 2. Mode of access: https://en.wikipedia.org/wiki/Military_engineering. – Date of access: 23.03.2018. 194 УДК 811.111:339.133.3 Baskleev Y., Dudchenko G., Piskun O. The Biggest Scam in the History Belarusian National Technical University Minsk, Belarus The Federal Reserve has put the US in a $19 trillion dollar debt, while the owners have become the richest people on the planet. Almost every citizen in the United States and Europe (roughly 1,063,143,000 people) have been robbed by the Federal Reserve. Most people think that the Federal Reserve is a government institution, but that is not the case, despite its deceiving name. The Federal Reserve is a privately owned company, in charge of central banking. 
It has set up a banking system, that slowly drains all wealth and resources from everyone in its economy. People, Companies, Government, Everyone! Even though the Federal Reserve's money scam is almost invisible on a global scale, the trick is fairly simple. The Federal Reserve controls the creation of US dollars. The money is printed by the US Mint, but it's strictly controlled and managed by the Federal Reserve, and they only have it printed when someone takes a loan. Therefore, every single dollar that exists is already in debt as soon as it's printed. When scaled down it becomes clear what this means. Let's say the economy got reset. There is 0 dollars in the economy. Then someone takes a $1000 loan from the bank. Now, $1000 exist in the economy, nothing more. But the bank wants their $1000 back, plus interest. This gives an impossible scenario, as paying back $1000 + interest is impossible, when only $1000 exist. Now there are only two options. One: the bank sends IRS to take whatever money is left, and the loan-taker's house or car or whatever they see fit. Option two: the loan-taker goes and 195 takes another loan, bringing more money into the economy, but also more debt. Option one is usually what happens when private people can't pay. Option two is what happens when the government can't pay, that is why they raise the country's debt year after year. The most significant part of the scam, is that the government loans their money from the Federal Reserve too! People then have to pay off this loan via taxes. If you thought your tax money went to building roads and schools or libraries and such, you are sadly mistaken. Almost everything goes to paying off debt. A never ending debt, because the more money the Federal Reserve prints, the more debt is created as well. This system ensures that there will always be more debt than money in the economy, which puts everyone collectively in the pockets of the bank. When this scam is performed on the entire Western World, and most other countries via the oil industry, it becomes invisible, and the Federal Reserve and the whole banking industry get away with it. The most genius thing about the scam, is that the Federal Reserve does not even have any gold to back up the money they print! They just add numbers on their computers, and then order the money printed and distributed, however they want. Therefore, the US dollar is in reality nothing more than Monopoly money. Completely worthless. You, or anyone else, could make a company and print little notes called money, and then start lending them to people with interest. People as a whole, will always be in debt, no matter what. People are fighting for money that doesn't exist, and therefore someone will go bankrupt from time to time. When a bankruptcy happens, the banks take that person's house or car or whatever they can. Something of real value, even though the money that was lend out, was just worthless pieces of paper. The Federal Reserve made a trick, turning paper into gold, and they hid it from the public via this system. 196 УДК 004.946 Stoiko Y., Rybaltovskaya E. Industry 4.0 Belarusian National Technical University Minsk, Belarus Industry 4.0 signifies the promise of a new Industrial Revolution – one that marries advanced production and operations techniques with smart digital technologies to create a digital enterprise that would not only be interconnected and autonomous but could communicate, analyze, and use data to drive further intelligent action back in the physical world. 
It represents the ways in which smart, connected technology would become embedded within organizations, people, and assets, and is marked by the emergence of capabilities such as robotics, analytics, artificial intelligence and cognitive technologies, nanotechnology, quantum computing, wearables, the Internet of Things, additive manufacturing, and advanced materials. While its roots are in manufacturing, Industry 4.0 is about more than simply production. Smart, connected technologies can transform how parts and products are designed, made, used, and maintained. They can also transform organizations themselves: how they make sense of information and act upon it to achieve operational excellence and continually improve the consumer/partner experience. In short, Industry 4.0 is ushering in a digital reality that may alter the rules of production, operations, workforce – even society. It’s now possible to create a smart factory where the Internet, wireless sensors, software and other advanced technologies work together to optimize the production process and improve customer satisfaction. These tools allow a business to react more rapidly to market changes, offer more personalized products and increase operational efficiency in a cycle of continuous improvement.
Industrial Revolutions over the ages:
c. 1780 – Industry 1.0, Mechanization: industrial production based on machines powered by water and steam.
c. 1870 – Industry 2.0, Electrification: mass production based on the assembly line.
c. 1970 – Industry 3.0, Automation: automation based on electronics and computers.
c. 1980 – Industry 3.5, Globalization: offshoring of production to low-cost economies based on lower communication and containerization costs.
Today – Industry 4.0, Digitization: introduction of digital technologies.
Industry 4.0 uses digital technologies to react more rapidly to market changes, offer more personalized products and increase operational efficiency. Industry 4.0 touches everything in our daily lives. The Fourth Industrial Revolution is important to understand because it doesn’t just touch manufacturers – it can touch all of us. While Industry 4.0 has grown to encompass business operations, the workforce, and society itself, its roots in the supply chain and manufacturing constitute the backbone of the world as we know it. What things are made of, how they are made, where they are made and how they get to us, and where they go when we need them fixed or we’re done using them: all of these things are part of the production life cycle. Industry 4.0 will likely change how we make things, but it could also affect how those things are moved (through autonomous logistics and distribution), how customers interact with them, and the experiences they expect to have as they interact with companies. Beyond that, it could drive changes in the workforce, requiring new skills and roles. Industry 4.0 integrates the digital and physical worlds. The digitization of operations, manufacturing, supply networks, and products enables companies to combine learnings from humans, machines, analytics, and predictive insights to hopefully make better, more holistic decisions. Fully connected processes present huge opportunities: rather than monitoring processes in a linear fashion, as has always been done, and operating reactively, companies can take learnings along the way and feed them back into the process, learn from what they are seeing, and adjust accordingly in real or near real time.
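The closed-loop idea in the previous sentence can be made concrete with a very small sketch. The Python fragment below is only a toy model under assumed names (read_temperature_sensor, set_heater_power and the 180 °C target are invented for the example) and does not describe any particular Industry 4.0 product; it simply shows a process reading its own sensor data and feeding a correction straight back into the next cycle.

```python
import random

# Toy closed-loop adjustment: read a sensor, compare the reading with the target,
# and feed the correction back into the next production cycle instead of reacting
# only after the fact. All names and numbers are illustrative assumptions.

TARGET_C = 180.0      # desired process temperature, degrees Celsius (assumed)
GAIN = 0.4            # how strongly the controller reacts to an error (assumed)

def read_temperature_sensor(power: float) -> float:
    """Stand-in for a wireless sensor: temperature rises with heater power, plus noise."""
    return 100.0 + 1.5 * power + random.uniform(-2.0, 2.0)

def set_heater_power(power: float) -> float:
    """Stand-in for an actuator command; keeps the setpoint within a valid range."""
    return max(0.0, min(100.0, power))

heater_power = 50.0   # current actuator setpoint, percent

for cycle in range(10):
    measured = read_temperature_sensor(heater_power)   # learning taken "along the way"
    error = TARGET_C - measured
    heater_power = set_heater_power(heater_power + GAIN * error)
    print(f"cycle {cycle}: measured {measured:6.1f} C, new power {heater_power:5.1f} %")
```

In a real plant the same pattern would run against networked sensors and a plant historian rather than a random-number stand-in, but the point of the sketch is the direction of the data flow: measurements are consumed by the process itself in real or near real time, not only by a report written afterwards.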
This should lead to smarter decisions, better-designed products, service and systems, potentially more efficient use of resources, and a greater ability to predict future needs. The digital thread represents one such end-to-end Industry 4.0 solution, linking the entire design and production process with a seamless strand of data that stretches from the initial design concept to the finished part. Beyond the digital thread, the use of the digital twin can enable organizations to gain insight into the inner workings of systems or facilities, simulate possible scenarios, and understand the impacts of changes in one node on the rest of the network. The benefits of introducing digital technologies. The benefits to manufacturers of adopting digital technologies are real. >60% of adopters say digital technologies helped boost their productivity. The main driver of productivity growth in a smart factory is the capacity to predict and prevent downtime, and to optimize equipment effectiveness and maintenance. > Almost 50% say they save operating costs. Savings may come from the following processes: real-time production monitoring and quality control to reduce waste and rework – predictive maintenance to prevent costly repairs and unplanned 199 downtime – higher automation to save labour costs and improve throughput – the use of 3-D printers to achieve faster prototyping, reducing the cost of engineering and accelerating time to market > 42% say they have improved overall product quality. For instance, real-time quality controls allow you to reduce, or even eliminate, customer returns that occur when products do not meet specifications. > 13% identified greater capacity to innovate as a benefit. While this is a low score, we believe greater innovation may unlock the most value for your business. New business models made possible by smart products and new advanced technologies, such as 3-D printing, are only beginning to emerge. They promise to spark innovation on a monumental scale over the next five to 10 years. We are already seeing inspiring examples of how small businesses are using connected products and customization to reinvent themselves in the digital context. It’s time to get started! The digital age has arrived. New digital technologies are changing the way products are developed, manufactured and delivered to customers. In fact, there’s never been a better time to get involved – technologies have matured and become more affordable and user-friendly. The time is right to join the Industry 4.0 revolution. 200 УДК 004.738.5 Andreev D., Akulov S., Slesarenok E. Web Development Belarusian National Technical University Minsk, Belarus It’s no secret that in our time people are dependent on the Internet. That why web developing is so popular. Web development is a broad term for the work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications (or just web apps) electronic businesses, and social network services. Web development includes: web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. Among web professionals, web development usually refers to the main non-design aspects of building web sites: writing markup and coding. 
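As a minimal illustration of what “writing markup and coding” means in practice, the sketch below uses Python’s standard http.server module to show the server-side half of the job: a small script that generates HTML markup and sends it to the browser. The page content and the port number are arbitrary assumptions chosen for the example; production sites would of course rely on a framework rather than the standard library alone.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    """A deliberately tiny server-side script: code on the server produces
    the markup (HTML) that the visitor's browser will render."""

    def do_GET(self):
        html = "<html><body><h1>Hello from a server-side script</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.end_headers()
        self.wfile.write(html.encode("utf-8"))

if __name__ == "__main__":
    # Serve the page on http://localhost:8000 until interrupted with Ctrl+C.
    HTTPServer(("localhost", 8000), HelloHandler).serve_forever()
```

Front-end work, discussed further below, would then style and script this markup in the browser with CSS and JavaScript.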
Most recently Web development has come to mean the creation of content management systems or CMS. These CMS can be made from scratch, proprietary or open source. In broad terms the CMS acts as middleware between the database and the user through the browser. A principle benefit of a CMS is that it allows non- technical people to make changes to their web site without having technical knowledge. Web development also involves web engineering, web design, web content development, client-side/server-side scripting, web server and network security configuration. Web development usually refers to the main non-design aspects of building web sites: writing markup and coding. Most recently 201 Web development has come to mean the creation of content management systems or CMS. For larger organizations and businesses, web development teams can consist of hundreds of people. Smaller organizations may only require a single permanent or contracting developer, or secondary assignment to related job positions such as a graphic designer or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department. There are two kinds of web developer specialization: front-end and back-end developers. Front-end developers deal with the layout and visuals of a website, while back-end developers deal with the functionality of a website. Back-end developers will program in the functions of a website that will collect data. Front-end web development is the practice of producing HTML, CSS and usually JavaScript for a website or Web Application so that a user can see and interact with them directly. Front-end languages are HTML, CSS, JavaScript. The back end of a website is a combination of technology and programming that powers a website, the behind-the-scenes functionality or brain of a site. This back end of a website consists of three parts that a user never sees: a server, an application, and a database. Back-end developers use languages like PHP, Ruby, Python, Java, and .Net to build an application, and tools like MySQL, Oracle, and SQL Server to find, save, or change data and serve it to the user in front end code. In development is often used MVC (Model-View- Controller) is an architectural pattern commonly used for developing user interfaces that divides an application into three interconnected parts. This is done to separate internal representations of information from the ways information is presented to and accepted from the user. The MVC design 202 pattern decouples these major components allowing for efficient code reuse and parallel development [1]. References: 1. Wikipedia [Electronic resource]. – Mode of access: https://en.wikipedia.org/wiki/Web_development_tools. – Date of access: 02.04.2018. 203 УДК 658.512.2 Savchits D., Slesarenok E. Industrial Design Belarusian National Technical University Minsk, Belarus Industrial design is the professional practice of designing products used by millions of people around the world every day. Industrial designers not only focus on the appearance of a product, but also on how it functions, is manufactured and ultimately the value and experience it provides for users. Every product you have in your home and interact with is the result of a design process and thousands of decisions aimed at improving your life through design. If architects design the house, then industrial designers design everything inside. 
Emerging as a professional practice in the early 19th century, industrial design has come a long way since its early inception and is thriving as a result of an expanded awareness of design in business, collaboration and critical problem solving. Pioneers like Charles and Ray Eames, Henry Dreyfuss and Dieter Rams paved the way for modern industrial designers such as Jony Ives, Yves Béhar, and Pattie Moore, FIDSA, to stand at the forefront of modern industrial design. “Design is a plan for arranging elements in such a way as best to accomplish a particular purpose” – Charles Eames. Today, there are a lot of industrial designers all over the world and the impact of the profession on modern society is immense. Industrial designers are responsible for designing everything from cars and toasters to smart phones and life- saving medical equipment. The breadth of work and social 204 impact created at the hands of industrial designers across the world is truly amazing. In professional practice, industrial designers are often part of multidisciplinary teams made up of strategists, engineers, user interface (UI) designers, user experience (UX) designers, project managers, branding experts, graphic designers, customers and manufacturers all working together towards a common goal. The collaboration of so many different perspectives allows the design team to understand a problem to the fullest extent, then craft a solution that skillfully responds to the unique needs of a user. Industrial designers design products for users – mainly people–but sometimes pets – of all races, ages, demographic, social status or ethnicity. To do this, empathy is a core attribute of the design process. An empathetic designer is able to walk in someone else’s shoes through research and observation to glean insights that will inform the rest the design process and ultimately result in a design solution that solves a problem in a beneficial and meaningful way. Industrial designers require a fair amount of formal education. Learn about the education, job duties and employment outlook to see if this is the right career for you. Industrial designers work with other professionals in designing ideas for clients and turning those ideas into new products. Successful industrial designers don’t stop with a bachelor’s degree. Master’s degree programs can increase an industrial designer’s marketability for potential job offers. In the ideation, or concept, phase of a project, designers will sketch, render, 3D model, create prototypes and test ideas to find the best possible solutions to a user’s needs. This phase of the design process is messy, fast paced and extremely exciting! By testing, breaking and rebuilding prototypes, designers can begin to understand how a product will work, look and be manufactured. 205 In the final stages of the design process, industrial designers will work with mechanical engineers, material scientists, manufacturers and branding strategists to bring their ideas to life through production, fulfillment and marketing. After months, and sometimes years, of development, a product will find its way to store shelves around the world where people can purchase it and bring it into their homes. It’s fun being a designer. They use their hands, heads, and hearts. Designers get to invent things and then make them into real things-things that we want. They use their heads for strategy, tactics, science, and thinking ahead. Designers actually make things with their hands: drawings, models, and samples. 
And use their own emotions to connect with the hearts so that people will want what we created. The combination is what makes being designer so interesting and valuable. 206 УДК 004.382 (476) Laptsionak U., Slesarenok E. “Minsk” Family of Computers Belarusian National Technical University Minsk, Belarus During 10 years, from 1959 till 1969, several types of general-purpose computers had been developed in Belarus. These machines had become basics for the solved fleet of computers and their large-scale production was organized. The Minsk machines actually faced no competition with other small general-purpose computers and easily became the basic model of this computer type. In 1956, upon completing the stage of development of the first computers, the resolution of the USSR Council of Ministers aimed at enforcing the expansion of computer production in the country was issued. In 1958, on the basis of the Ordzhonikidze factory in Minsk, a Special Design Department was organized to support and modernize the computers produced by the factory. Subsequently, it was transformed into an independent design and research company – NIIEVM working to this day. The first completely original project at the plant became a computer names “Minsk-1”. The Development of the device occurred in a fairly short time within 18 months. In parallel with the design of the machine, the Department also worked on preparing its series production. Computer testing took place in September 1960, and the first production samples appeared in the same year. The speed of the computer was estimated at 2.5 thousand operations per second (for comparison: the speed developed by the Moscow Institute of Electronic Control Machines computer M-3 was about 30 operations per second). It was the “Minsk-1” model 207 (800 valves, 2500 instructions per second, a ferrite memory for 1KWord, 31-bit word length, a 2-addresses-for-operands instruction set, with point fixed before the highest bit, a peripheral memory on a magnetic tape for 64KWord, a punched tape input at 80 words per second, and a digital printing output at 20 words per second). Programming for this computer was carried out in machine code, but a library of 100 programs was supplied together with the machine. Also some of the world’s first auto-programming systems – translators “Autocode Inzhener” and “Autocode Economist” – were developed for “Minsk-1”. Another competitive advantage of the machine was its relatively modest size. It took about 4 square meters of space to accommodate the entire system, while some other computers (for example, the Moscow BESM) took as much as 100 square meters. All this has allowed the computer “Minsk-1” in the first half of the 60’s to become the leading type of tube production machines in the entire USSR. For four years, from 1960 to 1964, 230 “Minsk-1” computers were made, including a number modified for various industries. “Minsk-1” had been produced up to 1964 and it had several fully inter-compatible versions: “Minsk-11” was designed for seismic data processing and for remote users. Eleven computers of this model were manufactured; “Minsk- 12” had an extended main memory for 2048 KWord and tape drives for 100KWord. Five machines of this model were turned out; “Minsk-14” and “Minsk-16” were designed for telemetric data processing and equipped with appropriate reading devices. 36 “Minsk-14” machines and 1 “Minsk-16” machine were brought out. 
“Minsk-100” was created by order of the Ministry of Interior of the USSR for the detection and storage of fingerprints and became the original fingerprint computer storage and retrieval system. 208 In parallel with the release of “Minsk-1” in 1960-1962, the second generation of the computer, “Minsk-2”, was developed, which represents the first semiconductor computer in the USSR. The speed of the device was estimated at 5-6 thousand operations per second. It is important that it was “Minsk-2” that became the first computer in the whole of the USSR, which had the ability to enter and process textual information (before all machines worked exclusively with digital data). In 1963, series production of “Minsk-2” was launched. In total, the plant produced 118 “Minsk-2” computers. A number of modified computers were also created on the basis of “Minsk-2”. “Minsk-26” and “Minsk-27” were intended, for example, for data processing, from meteorological rockets and Earth satellites “Meteor”. The most common model was the “Minsk-22” computer (734 devices were produced in total), which, in comparison with the base model, had several times more RAM and a tape drive. The device was extremely popular in the field of planning and economic calculations. But the most breakthrough model can be considered a computer “Minsk-23”, released in 1966. The speed of “Minsk-23” was about 7 thousand operations per second. It used many unique, technical developments, allowing the machine to work in multiprogram mode. At the same time, up to 3 working and 5 utility programs could be executed on the machine. The machine was fitted with a punched card reader (600 cards per second), a punched tape reader (1000 strings per second), an alphanumeric printer (400 strings per min), a card puncher (100 cards per min) and a tape puncher (80 characters per second). For the first time in the domestic computer history, “Minsk-23” was equipped with magnetic type drive – a rolled-type storage device which stored 32 bits per mm and was compatible with similar western drives. For this purpose, the machine was 209 supplied with the first in the USSR operating system “Dispatcher”. Several large Soviet enterprises were based on the “Minsk-23” computer. The system, for example, was used in the Moscow association Mosmoloko, also based on it, a system was built for selling and reserving Aeroflot air tickets. But commercially successful “Minsk-23” can not be mentioned. Only 28 computers were manufactured. This failure, probably, derived from the fact that the underlying ideas of the computer were not transparent to users, there was no compatibility with the previous model, its performance was insufficient for scientific and engineering tasks, and the demand for business data processing was not developed at the enterprises and organizations. “Minsk-32” computer came out in 1968 and absorbed all the best developments of previous models in the series. In addition to a significant increase in performance (the machine had a speed of about 30-35 thousand operations per second), the presence of a multiprogram operation system (up to four independent programs could work simultaneously) and the possibility of creating multi-machine systems based on it, the “Minsk-32” software compatibility with the previous computers of the “Minsk” family. 
The creation of complex and costly programs that operate only on a single hardware and software complex was common practice in the 1960s, so the implementation of such compatibility became a true innovation of “Minsk-32”, not only Soviet analogs, but also the majority of foreign computers. From 1968 to 1975, 2889 of these machines were produced, but despite such popularity, “Minsk- 32” became the last representative of the entire family of the Minsk computer. In 1970 the team of Minsk development engineers and manufactures, who had produced over 4000 computers were awarded with the USSR State Prize. 210 УДК 725.826 Shimanovitch M., Slesarenok E. Stadium Construction Belarusian National Technical University Minsk, Belarus The construction of the stadium is a complex process, consisting of several stages, from the development of the conceptual framework to the solemn opening of the sports ground and its subsequent exploitation. Concept development. Concept development is a long process that requires a thorough approach, in which designers and stakeholders must find a solution, which will satisfy requirements and interests of all project participants. The end goal of this designer stage is the creation of sketches and layouts, which will reflect individual features of the stadium. Construction design. The stadium shell is the way it is, due to geometry of stands and the concept of the building. The facade is constructed of steel and concrete (the material chosen depends on the project). Competitions that will be held at the stadium, as well as its capacity – are the factors that determine the optimal geometry and the form of the stands. Most often, the stands are designed in the form of the following geometric figures: 1) Rectangle. Due to the geometric shape in corner locations, visibility is far from ideal, that’s why during design stage these seats are not even envisaged. On one hand, this shape gives viewers the opportunity to be closer to the field, on the other, reduces the capacity of the stadium; 2) Oval. Oval stadium design allows not only to hold football tournaments, but to also host other sport events, such as athletic and cricket competitions. This form is the most widely used shape in construction of stadiums, which host track and field 211 competitions; 3) Rectangle with rounded corners. This form attracts attention with its absence of sharp corners, which provide a good visibility of the field and a better view from the angular sectors. Construction. As a rule, the construction of the stadium is carried out on the following principles: Construction of the supporting infrastructure involves a detailed analysis of the location; besides, the project may require access to roads, water supply systems, power grids, drainage and sewage systems, and other infrastructural facilities. Foundation work includes the following: in the soil, a pile system is required to be installed in order to provide the necessary deep foundation reinforcement; the foundation is laid on the pile system to distribute the stadium load from the columns through the foundation and on the piles. Concrete. The main elements of the building are foundation, columns, load bearing beams, floor slabs, elevator shafts and stairs. It should be stressed that the concrete structure of the stadium is the foundation that forms the overall shape of the stadium. Pre-manufactured elements. The largest of the pre- manufactured elements in the construction of the stadium are the stand tiers. 
After the columns and the supporting beams have been installed, the ready-made elements of the stands are installed next. The lower tier is mounted first. At the same time, the construction of the stadium structure continues. When the columns and supporting beams are mounted to the upper levels of the stadium, the upper tiers of the stands can then be installed. The following step involves installation of individual structures such as: machinery, electrical and plumbing equipment. Then comes roof construction and façade. For the construction of the roof and the facade of the stadium, it is necessary to install vertical support elements. In most cases, 212 steel columns are used in order to support the facade and the roof. Elements along the perimeter of the roof, such as cables or parts of steel cables, are mounted on to vertical supports. When the main parts of the stadium structure are ready, facade and roof facing elements can be installed. Furniture, fittings and equipment are installed closer to the end of the construction. In comparison to other facilities, stadium requires a large number of furniture elements, such as: large number of seats, large number of restrooms and many more. 213 УДК 629.3.027.5 Goncharevich V., Slesarenok E. Tires Belarusian National Technical University Minsk, Belarus The friction (traction) between the tire and the road determines the handling characteristics of any vehicle. Think about this statement for a second. The compounding, construction, and condition of tires are some of the most important aspects of the steering, suspension, alignment, and braking systems of any vehicle. A vehicle that handles poorly or that pulls, darts, jumps, or steers funny may be suffering from defective or worn tires. Understanding the construction of a tire is important for the technician to be able to identify tire failure or vehicle handling problems. Tires are mounted on wheels that are bolted to the vehicle to provide the following: shock absorber action when driving over rough surfaces; friction (traction) between the wheels and the road. All tires are assembled by hand from many different component parts consisting of various rubber compounds, steel, and various types of fabric material. Tires are also available in many different component parts consisting of various rubber compounds, steel, and various types of fabric material. Tires are also available in many different designs and sizes. Tread refers to the part of the tire that contacts the ground. Tread rubber is chemically different from other rubber parts of a tire, and is compounded for a combination of traction and tire wear. Tread depth is usually 11/32 in. deep on new tires (this could vary, depending on manufacturer, from 9/32 to 15/32 in.). Wear indicators are also called wear bars. When tread depth is down to the legal limit of 2/32 in., bald strips 214 appear across the tread. Tie bars are molded into the tread of most all-season-rated tires. These rubber reinforcement bars are placed between tread blocks on the outer tread rows to prevent unusual wear and to reduce tread noise. As the tire wears normally, the tie bars will gradually appear. This should not be mistaken for an indication of excess outer edge wear. A tire tread with what appears to be a solid band across the entire width of the tread is what the service technician should consider the wear bar indicator. Grooves are large, deep recesses molded in the tread and separating the tread blocks. 
These grooves are called circumferential grooves or kerfs. Grooves running sideways across the tread of a tire are called lateral grooves (Fig. 1). Figure 1 Grooves in both directions are necessary for wet traction. The trapped water can actually cause the tires to ride up on a layer of water and lose contact with the ground. This is called 215 hydroplaning. With worn tires, hydroplaning can occur at speeds as low as 30 mph on wet roads. Stopping and cornering is impossible when hydroplaning occurs. Sipes are small slits in the tread area to increase wet and dry traction. The sidewall is that part of the tire between the tread and the wheel. The sidewall contains all the size and construction details of the tire. Some tires turn brown on the sidewalls after a short time. This is due to ozone (atmosphere) damage that actually causes the rubber to oxidize. Premium-quality tires contain an anti-oxidizing chemical additive blended with the sidewall rubber to prevent this discoloration. The bead is the foundation of the tire and is located where the tire grips the inside of the wheel rim. The bead is constructed of many turns of copper- or bronzecoated steel wire. The main body plies (layers of material) are wrapped around the bead. Most radial-ply tires and all truck tires wrap the bead with additional material to add strength. Body ply. A tire gets its strength from the layers of material wrapped around both beads under the tread and sidewall rubber. This creates the main framework, or carcass, of the tire; these body plies are often called carcass plies. A 4-ply tire has four separate layers of material. If the body plies overlap at an angle (bias), the tire is called a bias-ply tire. If only one or two body plies are used and they do not cross at an angle, but lie directly from bead to bead, then the tire is called radial ply (Fig. 2). Rayon is a body ply material used in many tires because it provides a very smooth ride. A major disadvantage of rayon is that it rots if exposed to moisture. Nylon is a strong body ply material. Though it is still used in some tires, it tends to flat-spot after sitting overnight. Aramid is the generic name for aromatic polyamide fibers developed in 1972. Aramid is several times stronger than steel (pound for pound), and is used in high-performance-tire construction. Polyester is the most commonly used tire material because it 216 provides the smooth ride characteristics of rayon with the rot resistance and strength of nylon. Figure 2 - Typical construction of a radial tire. Belt. A tire belt is two or more layers of material applied over the body plies and under the tread area only, to stabilize the tread and increase tread life and handling. Belt material can consist of the following: steel mesh; nylon; rayon; fiberglass; aramid. All radial tires are belted. Inner liner. The inner liner is the soft rubber lining (usually a butyl rubber compound) on the inside of the tire that protects the body plies and helps provide for self-sealing of small punctures [1]. References: 1. James, D.H. Automotive technology / D.H. James. – Principles, Diagnosis, and Service. – Forth edition. – 2012. 217 УДК 004.31-181.48. Kabushkin Ph., Slesarenok E. Inside a CPU Belarusian National Technical University Minsk, Belarus Whether you are using a desktop PC, a laptop, or even a smartphone, the central processing unit is by far the most important piece of hardware in it. 
The central processing unit, or simply the processor, is responsible for managing all the other computer components and, above all, for running your programs and performing computing tasks. Let’s take a closer look at the primary element of every modern computer. The processor itself is made from a specially processed piece of electronic-grade silicon, which is obtained from sand by melting it and cleaning it of impurities. The silicon ingots are then cut into thin disks called wafers, after which each wafer is polished to a flawless, perfectly smooth surface. The following steps rely on nanoscale fabrication, which requires the material to be completely free of microbes and dust and the process to be carried out in special clean rooms maintained at a 99% cleanliness level, since even the tiniest particle of dust can ruin a wafer. The wafers are then covered with a layer of a special chemical called photoresist, which becomes soluble when exposed to ultraviolet light. The whole surface of the wafer is then exposed to UV light through a special lens carrying the engraved transistor pattern; the transistor is the basic electronic component that allows the current flow through a circuit to be controlled. The lens is very small, and its focal point makes the projected image about four times smaller by the time it reaches the wafer and reacts with the photoresist, which makes it possible to print features as small as roughly 10 nanometres. The photoresist that has interacted with the light can then be washed away. The remaining photoresist protects the silicon from etching, whereas the areas that were exposed to light are etched away with chemicals. Once the desired pattern has been obtained, the wafer is ionized; this treatment of the silicon alters the way it conducts current, forming many millions of very small transistors. A layer of insulation is then added on top, and holes for connections are etched into it. Next, the wafer is placed in a copper sulphate solution; copper ions travel from the positively charged anode to the negatively charged cathode, which is the wafer surface, creating a thin layer of copper on top. This process is called electroplating. The excess copper is then etched away, leaving the wires that connect multiple transistors into logic gates, memory and computing modules. The way the wires are arranged is determined by the CPU architecture. After that, the wafer is cut into individual pieces called dies, and every die is tested by running algorithms through all of its connections; if the response is wrong, the die is discarded. Finally, the die is mounted on the interface panel, which allows the CPU to interact with other hardware using regular-size wiring. The CPU crystal is usually divided into cores – sets of logic circuits that can each perform a single calculation at a time using binary code. The cores have individual controllers, which set the instructions for their core, and the data currently in use is stored in the cache. Different CPUs have different specifications depending on their architecture. The clock speed, measured in GHz, determines how many instructions a single core can complete in a second. The number of cores shows how many instructions the processor can carry out simultaneously. The TDP (thermal design power), measured in watts, indicates how much energy the CPU consumes and how much heat it emits. The development of CPUs continues today.
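As a rough, back-of-the-envelope illustration of how the first two figures combine (the four-core, 3 GHz processor and the assumption of one instruction per clock cycle are invented for the example; real cores can retire several instructions per cycle):

\[
\text{peak throughput} \approx N_{\text{cores}} \times f_{\text{clock}} \times \text{IPC} = 4 \times 3\cdot 10^{9}\ \text{Hz} \times 1 = 1.2\cdot 10^{10}\ \text{instructions per second.}
\]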
The individual transistors became smaller. The clock speeds rise. And more cores are being provided in newer CPUs. 219 УДК 621.3.047 Kapustsinski A., Khomenko S. Data Сenters’ Electric Power Supply Belarusian National Technical University Minsk, Belarus Large-scale computer systems have been around for a while, and many people are already familiar with the term data center. In the 1940s, computers were so large that individual rooms had to be specially set aside to house them. Even the steady miniaturization of the computer did not initially change this arrangement because the functional scope increased to such an extent that the systems still required the same amount of space. Even today, with individual PCs being much more powerful than any mainframe system from those days, every large-scale operation has complex IT infrastructures with a substantial amount of hardware – and they are still housed in properly outfitted rooms. Depending on their size, these are referred to as server rooms or data centers. Data centers are commonly run by large companies or government agencies. However, they are also increasingly used to provide a fast-growing cloud solution service for private and business applications. Data center preferably consists of a well-constructed, sturdy building that houses servers, storage devices, cables, and a connection to the Internet. In addition, the center also has a large amount of equipment associated with supplying power and cooling, and often automatic fire extinguishing systems. An indicator of the security level is provided by the tier rating as defined by the American National Standards Institute (ANSI). This proprietary rating system begins with Tier I data centers, which are basically warehouses with power, and ends 220 with Tier IV data centers, which offer 2N redundant power and cooling in addition to a 99.99% uptime guarantee. A Tier 1 data centre can be seen as the least reliable tier due to the fact that capacity components are non-redundant as well as the distribution path being a single, non-redundant path and as such, if a major power outage or disaster occurs, the equipment is more likely to go offline as there are no backup systems in place to kick in if any issues do occur. Tier 1 data centers are appropriate for:  companies with a passive web marketing presence,  small internet based companies with no customer support or e-commerce facilities on-site. Tier 2 data centres are considerably more reliable than Tier 1 data centres although they can be subject to problems with uptime. To achieve Tier 2, the facility has to meet the criteria achieved with a Tier 1 data centre, as well as ensuring that all capacity components are fully redundant. Tier 2 data centers are appropriate for:  Internet based companies who can cope with occasional downtime and will incur no penalties for this,  companies that do not run 24/7, allowing time for issues to be resolved,  higher intensity data driven servers such as model imaging programs. Tier 3 data centres are commonly seen as the most cost effective solutions for the vast majority of medium to large businesses, with availability topping 99.98%, ensuring minimal downtime. To put this figure in perspective, this means that your equipment should see a maximum of two hours of downtime on an annual basis. Tier 3 data centres have to meet all of the requirements of Tiers 1 and 2 as well as ensuring all equipment is dual-powered and has multiple uplinks. 
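The “maximum of two hours” statement for Tier 3 follows directly from the availability figure: with 8,760 hours in a year,

\[
(1 - 0.9998) \times 8760\ \text{h} \approx 1.75\ \text{h of downtime per year,}
\]

which is indeed just under two hours.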
Some facilities also offer partially fault-tolerant equipment, although to achieve Tier 4 all equipment, including HVAC, servers, storage, chillers and uplinks, must be fully fault-tolerant; such a facility is generally marketed as Tier 3+. Tier 3 data centers are appropriate for:
• companies with a worldwide business presence,
• companies that require 24/7 operation,
• organisations that require consistent uptime because downtime incurs financial penalties,
• e-commerce companies and companies running full online operations,
• call centres,
• VoIP companies,
• companies with heavily database-driven websites,
• companies that require a constant web presence.

A Tier 4 data center is generally the most expensive option for businesses. Tier 4 data centers adhere to all the requirements of Tiers 1, 2 and 3 and, in addition, ensure that all equipment is fully fault-tolerant. This is achieved by creating physical copies of all essential equipment, an arrangement known as N+N. Tier 4 data centers are appropriate for:
• large multinational companies,
• major worldwide organisations.

Fig. 1 – Data centers' availability
Fig. 2 – Data centers' downtime

Data centers are connected to two separate grid sectors operated by the local utility company. If one sector fails, the second one ensures that power is still supplied. The diesel motors are configured for continuous operation and are always kept preheated so that they can be started up quickly in the event of an incident. An outage in just one of the external grid sectors is enough to automatically activate the generators. Within the data center, block batteries ensure that all operating applications can run for 15 minutes. This backup system makes it possible to bridge the gap between the moment the utility company experiences a total blackout and the moment the diesel generators start up. The uninterruptible power supply (UPS) also ensures that power quality remains constant: it compensates for voltage and frequency fluctuations and thereby effectively protects sensitive computer electronics and systems.

A redundantly designed power supply system is another feature of the data center. It makes it possible, for example, to perform repairs on one network without having to turn off servers, databases, or electrical equipment. Several servers and storage units have multiple, redundant power supply units, which transform the supply voltage from two grid sectors to the operating voltage. This ensures that the failure of one or two power supply units does not cause any problems.

Building data centers is very expensive. However, these costs are justified: an interruption in the power supply of a bank, for example, leads to large monetary damage. According to research carried out in 2016, the average cost of a data center outage steadily increased from $505,502 in 2010 to $740,357 in 2016. Maximum downtime costs increased by 32% since 2013 and by 81% since 2010, reaching $2,409,991 in 2016.

Fig. 3 – Data centers' outage costs
Fig. 4 – Total cost of unplanned outages by industry in 2016

From these arguments it can be concluded that investment in the construction of data centers reduces the costs caused by electric power supply outages.

УДК 620.91

Papkova N., Khomenko S.
Alternative Energy Potential of the Republic of Belarus
Belarusian National Technical University
Minsk, Belarus

For a country that seeks independence, the security of energy supply should be a key question.
The depletion of conventional energy sources makes today's society use energy resources much more carefully and efficiently, search for and use alternative energy sources more actively, control climate change and environmental pollution, etc. The latest technologies and innovative approaches are vital for these areas; consequently, energy is one of the priorities of science and technology development in most countries inside and outside the EU, including Belarus [1].

Figure 1 shows the structure of primary energy resources for the Republic of Belarus [2].

Figure 1 – The structure of primary energy resources for the Republic of Belarus (gas – 80%, oil – 11%, renewable energy sources – 9%)

According to Figure 1, 91% of all primary energy resources used by our power plants are not renewable. Belarus is poorly endowed with fossil fuels such as oil and gas, which together are the main fuel for Belarusian power plants. Therefore, Belarus has to import more than 80% of the energy resources it consumes, mainly from Russia [1]. Despite the decline in the energy intensity of the gross domestic product (GDP), energy demand is increasing every year. That is why the use of energy-efficient technologies is highly relevant. The main alternative energy sources in our country are run-of-the-river hydroelectric plants, biogas plants, and wind and solar plants.

Hydroelectric engineering in Belarus is represented by 51 hydroelectric power stations in service with a total capacity of 34.6 MW. About 76% of the total hydroelectric capacity falls on 23 stations with a combined capacity of 26.3 MW [1]. All major and minor rivers in Belarus are used for power generation.

According to weather data, the Republic of Belarus has 250 overcast days, 85 partly cloudy days and only 30 clear days a year. The average solar energy input on the surface, taking nights and cloudiness into account, amounts to about 243 cal per cm² per day, which equals 2.8 kWh per m² per day; with an energy conversion efficiency of 12%, this gives about 0.3 kWh per m² per day [3] (a short numerical check of this conversion is sketched below). Such figures make solar plants inefficient in the country at present, although new technologies can increase the efficiency of solar panels.

At the moment there are more than 50 wind turbines on the territory of Belarus. They are installed in the Grodno, Minsk, Vitebsk and Mogilev Regions. Experts say that windmills pay for themselves within five years at an average annual wind speed of 6–8 m/s [4]. A map of the average design wind speed at a height of 100 m is shown in Figure 2; the areas with the strongest winds are shown in dark blue, while light blue marks the worst conditions. Our wind farm potential is higher than that of Germany [2].

Figure 2 – Average design wind speed at a height of 100 m

Wind power is often criticized for not being competitive with conventional forms of energy and for needing subsidies. However, Belarus has no gas and no significant water resources on its territory, which means that wind energy development is a great contribution to the future of our country. And let us not forget that wind energy is not only renewable but also clean. According to the Levelised Cost of Electricity (LCOE) method, which allows power plants with different power generation and cost structures to be compared, onshore and offshore wind farms are the cheapest and most effective way to produce both electricity and heat [6].

In recent years, much work has been done on including local fuel and energy resources, together with renewable energy sources, in the fuel balance.
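A minimal Python sketch of the unit conversion mentioned above (daily insolation in calories per square centimetre converted to kWh per square metre, then scaled by the 12% panel efficiency quoted in the text) might look like this; the input figures are those quoted above, not new measurements.

```python
# Convert daily solar energy input from cal/cm^2 to kWh/m^2 and apply panel efficiency.
CAL_TO_JOULES = 4.1868    # joules per (International Table) calorie
CM2_PER_M2 = 10_000
JOULES_PER_KWH = 3.6e6

insolation_cal_per_cm2_day = 243   # value quoted in the text
panel_efficiency = 0.12            # conversion efficiency assumed in the text

insolation_kwh_per_m2_day = (insolation_cal_per_cm2_day * CAL_TO_JOULES
                             * CM2_PER_M2 / JOULES_PER_KWH)
usable_kwh_per_m2_day = insolation_kwh_per_m2_day * panel_efficiency

print(f"Insolation: {insolation_kwh_per_m2_day:.2f} kWh/m^2 per day")  # ~2.8
print(f"Usable:     {usable_kwh_per_m2_day:.2f} kWh/m^2 per day")      # ~0.3
```

The output (about 2.8 and 0.3 kWh per m² per day) reproduces the figures in the paragraph above, on the assumption that the quoted insolation is indeed per square centimetre per day.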
840 areas suitable for the placement of wind turbines have been identified on the territory of the Republic of Belarus. Among the inspected areas, five located in the Grodno, Vitebsk and Minsk regions were selected as priorities. According to expert estimates, wind turbines with a total capacity of 115 MW can be installed in these areas. The unique weather conditions and geographical location of the Republic of Belarus can help our country reach a high level of energy self-sufficiency.

References:
1. Alternative energy of the Republic of Belarus [Electronic resource]. – Mode of access: http://investinbelarus.by/docs/2016%20Renewable%20energy.pdf. – Date of access: 10.03.2018.
2. International Renewable Energy Agency. Statistics Time Series. Republic of Belarus [Electronic resource]. – Mode of access: http://resourceirena.irena.org. – Date of access: 10.03.2018.
3. IEA Statistics: Belarus [Electronic resource]. – Mode of access: http://www.iea.org/countries/nonmembercountries/belarus/, 27.12.2013. – Date of access: 10.03.2018.
4. The Dialog. Wind power in Belarus [Electronic resource]. – Mode of access: http://the-dialogue.com/en/en16-wind-power-in-belarus/. – Date of access: 10.03.2018.
5. The Wind Power. A comprehensive database of detailed raw statistics [Electronic resource]. – Mode of access: http://www.thewindpower.net. – Date of access: 10.03.2018.
6. Stolzenberger, C. Levelised Cost of Electricity / C. Stolzenberger // LCOE 2015 [Electronic resource]. – Mode of access: https://www.vgb.org/en/lcoe2015.html. – Date of access: 10.03.2018.

УДК 656.225.073.9 – 025.71:629.33

Savenkov A., Khomenko S.
A New Way of Transporting Cars by Rail
Belarusian National Technical University
Minsk, Belarus

Cars are transported both by road and by rail. In railway transport, cars are carried in specialized car-carrying wagons – special-purpose freight cars for transporting cars, trailers, minibuses and trolleybuses. They are covered wagons or platforms, often with two tiers to increase capacity, and they have a relatively large mass at a low carrying capacity. There are different types of car-carrying wagons:
• covered wagons for transporting cars;
• car-grid (rack) wagons for transporting cars;
• platform wagons for transporting cars (platforms have a significant drawback – they do not protect the cargo from external influence, including vandalism);
• special SP Stac-Pac containers (cars are loaded into special containers, which are then loaded onto a railway platform) [1].

Such wagons hold on average from 4–6 to 12–14 cars. Loading is carried out in a string of 10–15 coupled wagons prepared for loading: their end doors are opened, and cars driven in from a loading ramp successively fill all the wagons, so unimpeded travel of the cars along the whole string must be ensured.

As an alternative, a different way of transporting vehicles, developed in the USA, may be proposed. In 1971, General Motors Corporation, together with the largest US railway company, Union Pacific, created a new type of wagon for transporting cars – the Vert-A-Pac. The new wagon can carry 30 cars instead of the 4–14 placed in a standard wagon or on a platform (a rough per-car cost comparison is sketched below). In this method of transportation the cars are placed vertically, in two rows, with their hoods pointing down [2]. The vehicles have four removable lifting eyes mounted on the chassis.
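To illustrate the economic argument before turning to the loading procedure, the following minimal Python sketch compares the transport cost per car for a conventional wagon and a Vert-A-Pac wagon. The per-wagon trip cost is a made-up, purely illustrative figure; only the capacities (14 and 30 cars) come from the text.

```python
# Illustrative comparison of cost per transported car.
# The trip cost below is an assumed, purely illustrative value,
# not a real tariff; capacities are taken from the text.
cost_per_wagon_trip = 3000.0   # hypothetical cost of moving one wagon (currency units)

capacities = {
    "conventional wagon (14 cars)": 14,
    "Vert-A-Pac wagon (30 cars)": 30,
}

for name, capacity in capacities.items():
    print(f"{name}: {cost_per_wagon_trip / capacity:.2f} per car")
```

Whatever the actual tariff, dividing the same wagon cost among 30 cars instead of 14 roughly halves the cost attributable to each car, which is the point made in the paper.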
When a wagon door is raised, the lifting eyes engage hooks on the door and the car simply hangs on them under its own weight. When the wagon is fully loaded and its doors are closed, the cars inside stand side by side and roof to roof – there is almost no free space left. The wagon doors are raised and closed with a conventional forklift [3].

This method of transportation increases the number of cars carried to 30, which is 2–7 times more than the standard method. Because the useful volume of the wagon is used more fully, the cost of transporting one car decreases: the cost of moving the wagon is divided among 30 cars instead of 4–14, so the share attributable to each car is smaller.

References:
1. Vert-A-Pac [Electronic resource]. – Mode of access: http://chevyvega.wikia.com/wiki/Vert-A-Pac. – Date of access: 16.03.2018.
2. The amazing Vert-A-Pac autorack car transporter! [Electronic resource]. – Mode of access: https://www.zeroto60times.com/2014/02/vert-a-pac-autorack-car-transporter/. – Date of access: 16.03.2018.
3. A look back in time: The GM/Southern Pacific Vert-A-Pac [Electronic resource]. – Mode of access: https://www.railwayage.com/mechanical/freight-cars/a-look-back-in-time-the-gm-southern-pacific-vert-a-pac/. – Date of access: 16.03.2018.

УДК 62-83:811.111

Tsybulkin P., Yalovik E.
Electric Drives Based on Permanent Magnet Motors and Methods of Controlling Them
Belarusian National Technical University
Minsk, Belarus

With industry increasingly oriented towards energy-saving technologies, more and more attention is being paid to energy-efficient electric drives. One such drive is the electric drive based on the permanent magnet synchronous motor (PMSM). Permanent magnets were not used in electrical machines for a long time, because permanent magnet materials remained immature until the mid-20th century. After the invention of Alnico and ferrite materials, permanent magnets became widely used in DC machines for small-power applications, such as automobile auxiliary motors. More recently, the improved quality of permanent magnet materials and advances in control methods have made it possible to replace induction machines with permanent magnet machines in many industrial areas [1].

With the development of permanent magnet materials and of the techniques for driving electric machines, PMSMs have increasingly replaced induction motors in many industrial areas thanks to their advantages in efficiency and size [1]. However, permanent magnet motors tend to be more expensive than AC induction motors and are known to be more difficult to start. Permanent magnet motor drives are being developed for many applications such as machine tools, compressors, pumps, friction welding units and turbine generators. The use of high-speed motor drives is aimed primarily at removing the mechanical gear and reducing the overall system dimensions. Permanent magnet motor drives are attractive for high-speed operation when variable speed is required. They can be designed in different forms and exhibit high efficiency over a wide range of operation [2]. Depending on the requirements of each application, different methods can be used to control PMSMs; the simplest of them, open-loop V/f (scalar) control, is illustrated in the sketch below and discussed in what follows.
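As a rough illustration of the open-loop idea, the following minimal Python sketch implements a constant V/f law with a small low-speed voltage boost: the stator voltage reference is kept proportional to the commanded frequency so that the stator flux stays roughly constant. The rated values and the boost level are assumed, illustrative parameters, not data from any particular motor.

```python
# Minimal open-loop V/f (scalar) control law: the stator voltage reference is
# proportional to the commanded frequency, with a small boost at low speed.
# All numbers are illustrative assumptions.
RATED_VOLTAGE_V = 400.0    # assumed rated stator voltage
RATED_FREQUENCY_HZ = 50.0  # assumed rated frequency
BOOST_V = 10.0             # small low-speed voltage boost (assumed)

def vf_reference(frequency_hz: float) -> float:
    """Return the stator voltage reference for a commanded frequency."""
    v_per_hz = (RATED_VOLTAGE_V - BOOST_V) / RATED_FREQUENCY_HZ
    voltage = BOOST_V + v_per_hz * frequency_hz
    return min(voltage, RATED_VOLTAGE_V)   # do not exceed the rated voltage

for f in (5, 25, 50, 60):
    print(f"{f:>2} Hz -> {vf_reference(f):.1f} V")
```

Because the law uses no feedback from the rotor, it is cheap to implement but, as explained below, its performance depends on the motor parameters and the load, which is exactly the limitation of scalar control.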
This article introduces scalar control as a simple method suitable for low-cost drive systems, and vector control as a more advanced option, well suited to applications that demand higher dynamic performance [3].

In drive systems where simple, low-cost control is desired and reduced dynamic performance is acceptable, open-loop control methods can be used. Typical applications of such systems include pump and fan drives. Open-loop control methods (or scalar control methods, as they are often called) exist in different variations, which include V/f schemes. Despite their simplicity and their ability to operate over a wide speed range, the performance of open-loop methods often depends on the motor parameters and the load conditions of the system. Such methods can experience power swings within specific speed ranges, which may cause the motor to lose synchronism. Furthermore, the behaviour of some open-loop schemes depends heavily on the selected controller parameters, and the selection of these settings is often based on a trial-and-error approach, which makes it quite time-consuming [3].

For more advanced drive systems that require higher dynamic performance, vector control is a more appropriate option than scalar control. Demanding applications that need vector control can be found, for instance, in the automotive industry. Vector control allows the torque and the flux of the PMSM to be controlled separately from each other, through a control structure similar to that of a separately excited DC machine. This decoupled control results in precise and efficient regulation of the motor. However, a major issue with vector controllers is that their operation requires information about the rotor position and speed of the PMSM. The most direct way to obtain this information is to use mechanical sensors on the shaft of the PMSM [4].

Despite the variety of modern PMSM types and control methods, they continue to develop, driven by the orientation of industry towards energy-saving technologies and the expanding scope of PMSM applications.

References:
1. Seong, T.L. Development and Analysis of Interior Permanent Magnet Synchronous / T.L. Seong. – PhD diss. – University of Tennessee, 2009. – 190 p.
2. Bianchi, N. High Speed Drive Using a Slotless PM Motor / N. Bianchi // IEEE Transactions on Power Electronics. – 2006. – № 4.
3. Stellas, D. Sensorless scalar and vector control of a subsea PMSM / D. Stellas. – Chalmers University of Technology, Goteborg, 2013.
4. Mishra, A. Modeling and implementation of vector control for PM synchronous motor drive / A. Mishra, J. Makwana // International Conference on Advances in Engineering, Science and Management (ICAESM). – 2012.

УДК 006.9:811.111

Herasimionak A., Yalovik E.
Importance of Implementing a Measurement Management System in Companies of the Republic of Belarus
Belarusian National Technical University
Minsk, Belarus

A measurement management system is a set of interrelated or interacting elements necessary to achieve metrological confirmation and continual control of measurement processes. An effective measurement management system ensures that measuring equipment and measurement processes are fit for their intended use; it is important for achieving product quality objectives and for managing the risk of incorrect measurement results.
The objective of a measurement management system is to manage the risk that measuring equipment and measurement processes could produce incorrect results affecting the quality of an organization's products. The methods used within a measurement management system range from basic equipment verification to the application of statistical techniques in measurement process control [1].

A measurement management system is a completely new phenomenon for Belarusian companies: it has not yet been implemented in our country. Nevertheless, such systems have already been successfully implemented in many foreign countries. The measurement management system has a structure similar to that of the quality management system and is, in essence, a subsystem of it, although with a narrower area of application.

The implementation of a measurement management system in a company includes the following steps:
1) investigation of existing inconsistencies in the structure, processes and resources of the metrological service;
2) modelling of the processes of the measurement management system in the company;
3) substantiation of the structure of the metrological service;
4) justification of the resources required for the measurement management system;
5) development of a procedure for planning, providing, managing and improving a single method of measuring, controlling and testing within the measurement management system.

It is advisable to introduce a measurement management system in companies where many different measurements are carried out. The development and implementation of a measurement management system will enable companies to achieve the following results:
1) increased reliability of measurement results at all stages of the product life cycle;
2) optimization of the quantity of measuring equipment and of the personnel involved.

These results will help to reduce production costs by minimizing the volume of poor-quality products and to increase consumer confidence in product quality. Therefore, the introduction of a measurement management system in companies of the Republic of Belarus is of great economic importance.

References:
1. Measurement Management Systems. Requirements for Measurement Processes and Measuring Equipment: ISO 10012:2003. – First edition. – 2003.
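As one hedged illustration of the statistical techniques mentioned above, the following minimal Python sketch computes Shewhart-style control limits (mean ± 3 standard deviations) from a set of reference readings and then checks new readings against them. The readings are invented example data, not results from any real measurement process.

```python
# Simple statistical check for a measurement process: establish 3-sigma control
# limits from reference readings, then flag new readings that fall outside them.
# All readings below are invented example data for illustration only.
import statistics

# Phase I: reference readings used to establish the control limits.
reference = [10.02, 10.01, 9.99, 10.03, 10.00, 9.98, 10.02, 10.01, 9.99, 10.00]
mean = statistics.mean(reference)
sigma = statistics.stdev(reference)
lower, upper = mean - 3 * sigma, mean + 3 * sigma
print(f"Control limits: [{lower:.3f}, {upper:.3f}]")

# Phase II: check new readings against the established limits.
new_readings = [10.01, 9.99, 10.12]
for value in new_readings:
    status = "OK" if lower <= value <= upper else "out of control"
    print(f"{value}: {status}")
```

A reading flagged as "out of control" would prompt metrological confirmation of the equipment or investigation of the measurement process, which is the kind of risk management the system described above is meant to provide.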