IAES International Journal of Robotics and Automation (IJRA) Volume 9, Issue 2, June 2020


ISSN: 2089-4856

IJRA
International Journal of Robotics and Automation

Editor-in-Chief
Mohd Fua'ad Rahmat, Universiti Teknologi Malaysia, Malaysia

Managing Editors
Arcangelo Castiglione, University of Salerno, Italy
Paolo Boscariol, Università degli Studi di Padova, Italy
Shehzad Ashraf Chaudhry, International Islamic University, Pakistan
Tole Sutikno, Universitas Ahmad Dahlan, Indonesia
Wai Lok Woo, Northumbria University, United Kingdom

Associate Editors
A. P. James, Indian Institute of Information Technology and Management-Kerala, India
Abdalhossein Rezai, Isfahan University of Technology, Iran
Abdullah Iliyasu, Tokyo Institute of Technology, Japan
Addisson Salazar, Universidad Politécnica de Valencia, Spain
Ahmed Elmisery, University of South Wales, United Kingdom
Ahmed Toaha Mobashsher, University of Queensland, Australia
Andrew Lowe, Auckland University of Technology, New Zealand
Ankit Chaudhary, Northwest Missouri State University, United States
Antonios Gasteratos, Democritus University of Thrace, Greece
Ashok Vaseashta, International Clean Water Institute, United States
Attia A. El-Fergany, University of Zagazig, Egypt
B. B. Gupta, National Institute of Technology Kurukshetra, India
Badrul Hisham Ahmad, Universiti Teknikal Malaysia Melaka, Malaysia
Bin Cao, Harbin Institute of Technology, China
Chaomin Luo, Mississippi State University, United States
Christoph Hintermüller, Johannes Kepler University, Austria
Chrysovalantou Ziogou, Chemical Process and Energy Resources Institute, Greece

Daniel Watzenig, Graz University of Technology, Austria
David Luengo, Universidad Politecnica de Madrid, Spain
Debashis De, Maulana Abul Kalam Azad University of Technology, India
Dedy Wicaksono, Swiss German University, Indonesia
Dhananjay Singh, Hankuk University of Foreign Studies, Korea
Emilio Jiménez Macías, University of La Rioja, Spain
Enrico Tronci, Sapienza University of Rome, Italy
Ezra Morris Gnanamuthu, Universiti Tunku Abdul Rahman, Malaysia
Felix Albu, Valahia University of Targoviste, Romania
Francesca Guerriero, University of Calabria, Italy
George Suciu, University Politehnica of Bucharest, Romania
Giovanni Pau, Kore University of Enna, Italy
Grienggrai Rajchakit, Maejo University, Thailand
Haikal El Abed, Technical Trainers College, Saudi Arabia
Hamed Mojallali, University of Guilan, Iran
Hamidah Ibrahim, Universiti Putra Malaysia, Malaysia
Hari Prabhat Gupta, Indian Institute of Technology (BHU) Varanasi, India
Harikumar Rajaguru, Bannari Amman Institute of Technology, India
Hongjia Li, Chinese Academy of Sciences, China
(continued on the next page)

Published by:
Institute of Advanced Engineering and Science (IAES)
Website: http://ijra.iaescore.com
Email: ijra@iaescore.com, ijra.iaes@gmail.com


Associate Editors (cont.)
Horst Hellbrück, University of Applied Sciences Lübeck, Germany
Idris bin Ismail, Universiti Teknologi PETRONAS, Malaysia
Imran Shafique Ansari, University of Glasgow, United Kingdom
Jia-Chin Lin, National Central University, Taiwan
Jianjun Ni, Hohai University, China
John Strassner, Huawei, United States
José Alfredo Costa, Universidade Federal do Rio Grande do Norte, Brazil
Juan-José González-de-la-Rosa, University of Cadiz, Spain
Kang Song, Qingdao University, China
Ke-Lin Du, Concordia University, Canada
Khairulmizam Samsudin, Universiti Putra Malaysia, Malaysia
Larbi Boubchir, University of Paris 8, France
Lin Wu, Hefei University of Technology, Australia
Luca Di Nunzio, University of Rome "Tor Vergata", Italy
M. Hassaballah, South Valley University, Egypt
Mahdi Imani, George Washington University, United States
Makram Fakhry, University of Technology, Iraq
Mario Alberto Ibarra-Manzano, Universidad de Guanajuato, Mexico
Md Jan Nordin, Universiti Kebangsaan Malaysia, Malaysia
Messaoud Amairi, National Engineering School of Gabes, Tunisia
Mihaela Albu, Politehnica University of Bucharest, Romania
Ming-Fong Tsai, National United University, Taiwan
Mohammad Abdullah, University Tun Hussein Onn Malaysia, Malaysia
Mohammadali Behrang, FARAB Co., Iran
Mohd Amri Md Yunus, Faculty of Electrical Engineering, Malaysia
Mohd Ashraf Ahmad, Universiti Malaysia Pahang, Malaysia
Mohd Ridzuan Ahmad, Universiti Teknologi Malaysia, Malaysia
Mohd Zubir MatJafri, Universiti Sains Malaysia, Malaysia
Mostafa Abdulghafoor Mohammed, Great Imam University College, Iraq
Nabil Derbel, University of Sfax, Tunisia
Nagender Suryadevara, University of Hyderabad, India
Nicola Ivan Giannoccaro, University of Salento, Italy
Nicola Pasquino, University of Naples Federico II, Italy

Norizam Sulaiman, Universiti Malaysia Pahang, Malaysia
Nuno Rodrigues, Instituto Politécnico de Bragança, Portugal
Nuri Yilmazer, Texas A&M University-Kingsville, United Kingdom
Paolo Crippa, Università Politecnica delle Marche, Italy
Pascal Lorenz, University of Haute Alsace, France
Peng Zhang, Stony Brook University, United States
Peter Beling, University of Virginia, United States
Pierre Melchior, Bordeaux-INP, France
Priya Ranjan, SRM University, India
Pushpendra Singh, JK Lakshmipat University, India
Rajesh Kumar, Malaviya National Institute of Technology, India
Ranathunga Arachchilage Ruwan Chandra Gopura, University of Moratuwa, Sri Lanka
Ricardo Rabelo, Federal University of Piaui (UFPI), Brazil
Riza Muhida, University of Bandar Lampung, Indonesia
S Surender Reddy, Woosong University, Korea
S. Parasuraman, Monash University Malaysia, Malaysia
Samir Ladaci, National Polytechnic School of Constantine, Algeria
Sang C. Lee, Wellness Convergence Research Center, Korea
Santhanakrishnan Anand, New York Institute of Technology, United States
Sayed Chhattan Shah, Hankuk University of Foreign Studies, Korea
Sihai Zhang, University of Science and Technology of China, China
Simona V. Halunga, University Politehnica of Bucharest, Romania
Siti Anom Ahmad, Universiti Putra Malaysia, Malaysia
Stavros Ntalampiras, Politecnico di Milano, Italy
T. S. Gunawan, International Islamic University Malaysia, Malaysia
Tai-Chen Chen, MAXEDA Technology, Taiwan
Thinagaran Perumal, University Putra Malaysia, Malaysia
Tossapon Boongoen, Mae Fah Luang University, Thailand
Vicente Garcia Diaz, University of Oviedo, Spain
Wael Badawy, Nile University, Egypt
Yanzheng Zhu, Huaqiao University, China
Yilun Shang, Northumbria University, United Kingdom
Youssef Errami, Chouaib Doukkali University, Morocco


IJRA
International Journal of Robotics and Automation

General concepts of multi-sensor data-fusion based SLAM Jan Klečka, Karel Horák, Ondřej Boštík

63-72

Fuzzy logic controller design for PUMA 560 robot manipulator Abdel-Azim S. Abdel-Salam, Ibrahim N. Jleta

73-83

Robots for search site monitoring, suspect guarding, and evidence identification Yi-Chang Wu, Jih-Wei Lee, Huan-Chun Wang

84-93

Particle swarm optimization algorithms with selective differential evolution for AUV path planning Hui Sheng Lim, Shuangshuang Fan, Christopher K.H. Chin, Shuhong Chai, Neil Bose

94-112

Dynamics of trunk type robot with spherical piezoelectric actuators Aistis Augustaitis, Vytautas Jurėnas

113-122

Control system design of duct cleaning robot capable of overcoming L and T-shaped ducts Myeong In Seo, Woo Jin Jang, Junhwan Ha, Kyongtae Park, Dong Hwan Kim

123-134

Inverse kinematic analysis of 3 DOF 3-PRS PM for machining on inclined prismatic surfaces Hishantkumar Rashmikantbhai Patel, Yashavant Patel

135-142

Designing and testing of a smart firefighting device system (LAHEEB) Yousef Samkari, Kamel Guedri, Mowffaq Oreijah, Shadi Munshi, Sufyan Azam

143-152

Responsibility for the contents rests upon the authors and not upon the publisher or editor.

IJRA | Vol. 9 | No. 2 | pp. 63-152 | June 2020 | ISSN 2089-4856



International Journal of Robotics and Automation (IJRA) Vol. 9, No. 2, June 2020, pp. 63∼72 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp63-72

General concepts of multi-sensor data-fusion based SLAM

Jan Klečka, Karel Horák, Ondřej Boštík
Department of Control and Instrumentation, Brno University of Technology, Czech Republic

Article Info

Article history:
Received Sep 30, 2019
Revised Oct 06, 2019
Accepted Feb 18, 2020

Keywords:
Data fusion
Localization
Mapping
Partially collective mapping
Simultaneous localization and mapping (SLAM)

ABSTRACT

This paper approaches the problem of Simultaneous Localization and Mapping (SLAM) algorithms, focusing specifically on the concurrent processing of data from a heterogeneous set of sensors. The sensors are considered to differ in the physical quantity they measure, and so the problem of effective data-fusion is discussed. A special extension of the standard probabilistic approach to SLAM algorithms is presented. This extension is composed of two parts: first, a general perspective on multiple-sensor based SLAM is presented, and then three archetypal special cases are discussed. One archetype, provisionally designated "partially collective mapping", is also analyzed from a practical perspective because it implies promising options for implicit map-level data-fusion.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Jan Klečka
Department of Control and Instrumentation, Brno University of Technology
Technická 12, Brno, Czech Republic
Email: klecka@feec.vutbr.cz

1.

INTRODUCTION

After more than three decades of research, Simultaneous Localization and Mapping (SLAM) algorithms still provide a variety of open topics for further development, as can be seen, e.g., in the survey by Cadena et al. [1] or in the critique by Huang et al. [2]. These algorithms are designed to continuously process given observations of the surroundings in order to provide the observer's current position (or sometimes the whole trajectory) and a map of the observed environment. Such information is irreplaceable feedback for practically any navigation task, e.g. trajectory planning or complex movement execution. Many application fields for SLAM algorithms can be found; we chose to underline only three that, as we feel, are widely discussed nowadays: navigation of autonomous cars, as discussed by Bresson et al. [3]; various Industry 4.0 tasks, e.g. the warehouse inventory check presented by Beul [4]; and augmented-reality tasks, as shown by Klein and Murray [5]. For several years we have been dealing with SLAM based on data-fusion of various sensors, and this paper aims to report some general findings we have made. Our methodology was originally a mainly inductive process: we began with the concept of building a map from simple geometrical entities that approximate, in a piecewise manner, the surfaces of the solids forming the mapped environment, and during development we iteratively generalized this specific concept until it fit the standard theory of probabilistic SLAM algorithms. However, the following description proceeds in a more comprehensible deductive manner, where we start with the general and work our way to the specific. We have tried to follow common notation customs; nevertheless, for maximal clarity of the following descriptions, we quickly state the rules used. Matrix and vector symbols are bold, e.g. A, x, where uppercase



is used for matrices and lowercase for vectors. Bold uppercase symbols are also used for sets, with lower indices showing the range of their cardinality, e.g. Z_{0:N} = {z_0, z_1, ..., z_N}. Scalar symbols are italic, e.g. N. Subscripts express a specific element of a larger collection, e.g. z_n is the realization of z at time t = n. Superscripts in square brackets symbolize a specific modality, e.g. z^[k] is z associated with a k-type sensor. A normal font is used for functions, e.g. h(·) is a function named h. 2.

RELATED WORKS

As already indicated in the introduction, besides the concept of data-fusion based SLAM we also deal with the concept of SLAM using a map representation in the form of a collection of geometric entities, so we split this section into the respective subsections.

2.1. Data-fusion in the context of SLAM

A substantial number of papers that mention the keyword fusion in the context of SLAM algorithms deal with processing observations from a single RGB-D camera (often even specifically the Microsoft Kinect). Examples of such works are the KinectFusion algorithm presented by Newcombe et al. [6], the Fusion++ algorithm by McCormac et al. [7], and ElasticFusion by Whelan et al. [8, 9]. Several teams have also reported SLAM based on observations from multiple sensors. For example, Burian et al. [10] processed data from a custom-made sensory head equipped with two CCD cameras, two thermo-cameras, and a rangefinder; the rangefinder data is used as a depth reference for the camera images and can therefore be enhanced by using the mathematical models of the individual cameras. Fang et al. presented a SLAM-capable system with a CCD camera and sonar [11], which improves reliability by utilizing feature-level data-fusion. Note that in the algorithms listed so far, data-fusion is always conducted prior to the SLAM iteration, so the SLAM algorithms process already-fused data. Notice moreover that the various modalities are typically in conceptually non-equivalent positions: the depth-perception modality usually holds an irreplaceable position, while other modalities (like color) are used to increase the robustness of the whole solution or just for map-presentation purposes.

2.2. Map as a set of non-point geometrical entities

Some papers can be found that present solutions to SLAM problems using a map representation in the form of a collection of geometrical entities. For example, lidar-based 2D SLAM that represents the environment by a set of lines is shown by Garulli et al.
in [12] and also by Choi et al. [13]. An example of lidar-based 3D SLAM that uses plane features is presented by Ulas and Temeltas [14]. These concepts are not specific to lidar: Zhou et al. [15] and Uehara et al. [16] report vision-based SLAM algorithms that utilize line features, and Yang et al. [17] show that utilizing planes can improve the robustness of monocular SLAM compared to standard, strictly point-based approaches. There are also reports that address only partial problems such as segmentation, for example an algorithm for approximating a 2D point cloud by a collection of lines by Jelinek et al. [18], or the detection of planes in a 3D point cloud by Hulik et al. [19] and by Pathak et al. [20]. 3.

PROBABILISTIC APPROACH

In this section, the mathematical background of fusion-based algorithms is presented. We present the problem from a probabilistic perspective to retain maximal generality of the given formulas, even though some concretizations have been made: we assume a strictly static environment and, from the perspective of the estimated trajectory, we provide solutions for two variants - the "online" SLAM, which aims only to estimate the most recent pose, and the "full" SLAM, which provides a way to estimate the whole trajectory.

3.1. Standard theory

The presented description is equivalent to those given in standard SLAM-oriented publications, e.g. the survey by Durrant-Whyte et al. [21] or the book Probabilistic Robotics by Thrun et al. [22]. Let's have some observer that moves in an environment given by a parameterization m and, during its movement, repeatedly conducts observations z. The observer's relation to this environment, e.g. its position and orientation, is given by the state x.



Observations describe the observer's surroundings and are degraded by noise; they can therefore be described by a conditional probability distribution, usually called the observation model:

p(z_n | x_n, m)    (1)

Because of the nature of the observer entity, the state vector will most probably be subject to some dynamics that bound its change between observations. This link may depend on some observable quantity u and is also stochastic, so it can be described by a conditional probability distribution called the motion model:

p(x_n | x_{n-1}, u_n)    (2)
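As a concrete (and deliberately simplistic) illustration of models (1) and (2), consider a hypothetical 1D observer: the state x is its position, the map m is the position of a single landmark, an observation z measures the landmark relative to the observer, and u is an odometry increment. All function names and noise levels below are our own illustrative choices, not part of the paper's formulation:

```python
import math

# Hypothetical 1D illustration of the observation model (1) and motion model (2):
# state x = observer position, map m = one landmark position, z = relative range,
# u = odometry increment; both models are Gaussian densities.

def gaussian_pdf(value, mean, sigma):
    return math.exp(-0.5 * ((value - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def observation_model(z, x, m, sigma_z=0.5):
    # p(z_n | x_n, m): the landmark is expected at range m - x, degraded by noise
    return gaussian_pdf(z, m - x, sigma_z)

def motion_model(x_new, x_prev, u, sigma_u=0.2):
    # p(x_n | x_{n-1}, u_n): the new state is the previous one shifted by the control
    return gaussian_pdf(x_new, x_prev + u, sigma_u)

# A consistent observation (landmark at 5.0 seen from 2.0 at range ~3.0) is far more
# likely than an inconsistent one:
print(observation_model(3.0, x=2.0, m=5.0) > observation_model(1.0, x=2.0, m=5.0))  # True
```

Any real sensor would of course replace these toy densities with a calibrated model, but the conditional structure of (1) and (2) is exactly what the code encodes.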

Because of the stochastic nature of both the observation and the motion model, the SLAM problem lies, from a general point of view, in determining the probability distribution of the pose and the map conditioned on the conducted observations:

p(x_N, m | Z_{0:N}, U_{1:N})    (3)

This distribution also has to represent our prior belief about the state and map distribution. An analytic solution of this problem can be found using the Bayes formula as:

p(x_N, m | Z_{0:N}, U_{1:N}) = η p(z_N | x_N, m) p(x_N, m | Z_{0:N-1}, U_{1:N})    (4)

where η is a normalization constant and the second term is obtained by propagating the previous belief into the current time using the motion model:

p(x_N, m | Z_{0:N-1}, U_{1:N}) = ∫ p(x_{N-1}, m | Z_{0:N-1}, U_{1:N-1}) p(x_N | x_{N-1}, u_N) dx_{N-1}    (5)

The realization of equation (4) is usually called the update step and the realization of equation (5) the prediction step. This recurrent form of the solution is standardly referred to as "online" SLAM and can fairly straightforwardly be seen to be applicable to a real-time process. The second frequently utilized form of the SLAM solution is the so-called "full" SLAM, which is non-recurrent and aims at describing the distribution of the whole trajectory:

p(X_{0:N}, m | Z_{0:N}, U_{1:N}) = η p(x_0) [ ∏_{n=0}^{N} p(z_n | x_n, m) ] [ ∏_{n=1}^{N} p(x_n | x_{n-1}, u_n) ]    (6)
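The online recursion of the update step (4) and the prediction step (5) can be sketched as a minimal histogram filter. To keep the sketch short we hold the map fixed (pure localization over a discrete 1D grid); the world size, landmark position, and noise weights are illustrative assumptions, not values from the paper:

```python
# Minimal histogram-filter sketch of the online SLAM recursion: update (4) multiplies
# the predicted belief by the observation likelihood and renormalizes with eta;
# prediction (5) propagates the previous belief through the motion model.

N_CELLS = 10                      # discrete 1D world
landmark = 7                      # known map: a single landmark cell

def normalize(belief):
    eta = sum(belief)             # the normalization constant eta from equation (4)
    return [b / eta for b in belief]

def predict(belief, u):
    # p(x_n | x_{n-1}, u_n): shift by u, with some probability of under/over-shoot
    new = [0.0] * N_CELLS
    for x_prev, p in enumerate(belief):
        for du, w in ((u - 1, 0.1), (u, 0.8), (u + 1, 0.1)):
            new[(x_prev + du) % N_CELLS] += w * p
    return new

def update(belief, z):
    # p(z_n | x_n, m): z is the measured distance to the landmark, +/-1 cell of noise
    likelihood = [0.8 if (landmark - x) - z == 0 else
                  0.1 if abs((landmark - x) - z) == 1 else 0.01
                  for x in range(N_CELLS)]
    return normalize([l * b for l, b in zip(likelihood, belief)])

belief = [1.0 / N_CELLS] * N_CELLS          # uniform prior p(x_0)
belief = update(predict(belief, u=2), z=5)  # one predict/update cycle
print(max(range(N_CELLS), key=lambda x: belief[x]))  # most likely pose: 2
```

Extending the grid to jointly cover pose and map cells would turn this localization sketch into a (very inefficient) literal implementation of (4)-(5).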

3.2. General multi-sensor based SLAM

Now, let's consider that the set of observations is composed of subsets, where each subset contains only observations from one particular sensor modality:

Z_{0:N} = { Z^[1]_{0_1:N_1}, Z^[2]_{0_2:N_2}, ..., Z^[K]_{0_K:N_K} }    (7)

where every time-index range satisfies 0_k:N_k ⊂ 0:N. Each modality then has its own particular observation model:

p(z^[k]_n | x_n, m)    (8)

The motion model stays conceptually unchanged; we can assume the same form as in the general case. These eventualities do not change the above-mentioned equations dramatically: the only change lies in substituting the particular observation models for the general one. Specifically, the update step of the online SLAM becomes

p(x_N, m | Z_{0:N}, U_{1:N}) = η p(z^[k]_N | x_N, m) p(x_N, m | Z_{0:N-1}, U_{1:N})    (9)

and the probability distribution of the full variant takes the following form:

p(X_{0:N}, m | Z_{0:N}, U_{1:N}) = η p(x_0) [ ∏_{n=0}^{N} p(z^[k]_n | x_n, m) ] [ ∏_{n=1}^{N} p(x_n | x_{n-1}, u_n) ]    (10)

This may look like no progress at all; however, that is because we have not yet taken into account that additional modalities change more things than just the observation model.



[Figure 1. Conditionally independent algorithms: a graphical model with states x_0...x_3, asynchronous per-modality observations z_0^[1], z_1^[2], z_2^[1], z_3^[2], and separate maps m^[1], m^[2]]

3.3. Special cases of multi-sensor based SLAM

In this section, we specialize the above-mentioned formulas by assuming specific structures derived from the mutual relations of the different modality observations. Specifically, we analyze three cases that we consider to be archetypes from which real situations can be composed.

3.3.1. Conditionally independent algorithms

Let's consider that the given modalities (or at least the used style of their abstraction) do not allow forming any cross-modality quantity that could represent common map elements, and that, in addition, their observations are asynchronous in capture time, so each one belongs to a different state of the observer (see Figure 1). This leads to a separation of the map parameterization m into a set of sensor-specific representations

m = M^[1:K] = { m^[1], m^[2], ..., m^[K] }    (11)

where each particular map m^[k] is independent of the observations z^[l] of any other modality:

p(z^[k]_n | x_n, m^[l]) = p(z^[k]_n)    ∀ k ≠ l    (12)

If we apply these rules to the recurrent SLAM equations, we can alter them into a form where the update step is separable in terms of modality. Note that the only cross-modality link is, in this case, established by the motion model. The weaker the motion model, the closer the uni-modal parts are to mutual independence; in the extreme case, assuming that the motion model does not exist at all, this archetype leads to completely independent parallel SLAM algorithms. Generally, we can state that the particular maps can be considered conditionally independent given the state. Data-fusion is, in this case, deferred to post-processing, with no benefit at runtime.

3.3.2. Super-observation

The second archetype is based on the assumption that the acquisition of the observations is conducted in a synchronized manner. So even though the observer uses multiple sensors, their capture times are synchronized, and all particular modality observations always belong to one single state realization x (see Figure 2). Under these assumptions, we can define the observation set as a collection of subsets that contain isochronous observations:

Z_{0:N} = { Z^[1:K]_0, Z^[1:K]_1, ..., Z^[1:K]_N }    (13)

Because from an analytical perspective it is irrelevant whether the observation is a vector or a set, we can define the composed observation model and then apply the single-observation theory:

p(Z^[1:K]_n | x_n, m) = ∏_{k=1}^{K} p(z^[k]_n | x_n, m)    (14)

Note that data-fusion, in this case, takes place in a preprocessing step.
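The product structure of (14) is easy to sketch in code. The two per-modality likelihoods below are invented toy models (a "sharp" and a "coarse" sensor); the point is only how the composed observation model multiplies them:

```python
# Sketch of the super-observation archetype (14): isochronous observations from K
# sensors are treated as one composed observation whose likelihood is the product of
# the per-modality observation models. Both toy models are illustrative assumptions.

def lidar_model(z, x, m):
    # p(z^[1] | x, m): narrow, accurate range sensor
    return 1.0 if abs(z - (m - x)) < 0.2 else 0.01

def camera_model(z, x, m):
    # p(z^[2] | x, m): coarser range-like cue
    return 0.8 if abs(z - (m - x)) < 1.0 else 0.1

def super_observation_model(z_all, x, m):
    # equation (14): product of the particular observation models over k = 1..K
    models = (lidar_model, camera_model)
    likelihood = 1.0
    for p_k, z_k in zip(models, z_all):
        likelihood *= p_k(z_k, x, m)
    return likelihood

# Agreement of both modalities yields a much higher joint likelihood than conflict:
consistent = super_observation_model((3.1, 2.8), x=2.0, m=5.0)
conflicting = super_observation_model((3.1, 7.0), x=2.0, m=5.0)
print(consistent > conflicting)  # True
```

The multiplication also makes the archetype's limitation visible: a dropout or failure of any one sensor at time n collapses the whole composed likelihood, which is the robustness concern raised in the conclusion.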


[Figure 2. Super-observation: a graphical model with states x_0, x_1, synchronized observations z_0^[1], z_0^[2], z_1^[1], z_1^[2], and a single common map m]

[Figure 3. Partially collective mapping: a graphical model with states x_0...x_3, observations z_0^[1], z_1^[2], z_2^[1], z_3^[2], a common map m^[com], and modality-specific remainders r^[1], r^[2]]

3.3.3. Partially collective mapping

The third and final archetype presented in this section is unique in its map composition: at least part of the map representation is common to all available modalities, so all sensors participate in its estimation (see Figure 3). Let's assume that the map representation can be defined as the following collection:

m = { m^[com], r^[1], r^[2], ..., r^[K] }    (15)

where m^[com] is the common part of the map (or simply the common map) and the r^[k] are modality-specific remainder vectors. The combination of the common map m^[com] and a particular remainder vector r^[k] can be interpreted as a particular map m^[k]. The common map m^[com] is therefore dependent on every observation, while the remainder vectors r^[k] are mutually conditionally independent:

p(m^[com] | x_{0:N}, Z^[k]_{0_k:N_k}) ≠ p(m^[com] | x_{0:N})    ∀ k ∈ 1:K    (16)

Data-fusion is, in this case, implicitly embedded into the SLAM algorithm.

4. PRACTICAL ASPECT OF COMMON MAP

By analyzing the above-mentioned archetypes, we concluded that the concept of the common map represents a promising way to develop effective multi-sensor data-fusion based SLAM algorithms, because it implicitly enforces a high level of data fusion. However, the probabilistic approach to this concept is highly abstract, which is why we devote this section to its more specific and practical aspects. Two subsections follow. In the first, we deal with a specific way to practically implement the concept of the common map, namely composing it from the parameters of a piecewise function that represents the surface of the observed environment. In the second subsection, we follow up on the previous findings



into requirements on the observation functions, leading to a categorization of real sensors according to their usability in the context of a geometrical-entities based collective map.

4.1. Geometrical-entities based collective map

A continuous function that approximates the surface of obstacles is, in our opinion, an advantageous quantity to use for the common map definition, because standard SLAM-capable sensors always observe this quantity in some way. For example, there is a very low probability that data from a lidar, a visible-spectrum (vis) camera, and a thermal (IR) camera would share a substantial number of feature points in the sense of belonging to the same spatial points. What is highly probable, however, is that these observations would describe the same planes and curves that form the environment's surfaces. Let's have an analytical formula for an observation model, where the observation is a vector that describes, in a spatially distinguished point-wise manner, some quantity exhibited by points of the surrounding environment:

z^[k]_n = h^[k](x_n, m^[k], v^[k]_n)    (17)

where h^[k] is the observation function and v^[k]_n is a noise vector that models the stochasticity of the process. If we know that some subset of the observation elements belongs to a specific geometrical entity, we can generally express this knowledge by equality constraints

G_i(m) = 0    (18)

where G_i is a function that defines the constraints specific to the i-th entity. For example, the following constraint bounds specific points to lie on the same line/plane:

G_i(m) = [M_i  1] π_i = 0    (19)

where π_i is a vector of coefficients that defines the line/plane and M_i is a matrix whose rows are the spatial points belonging to the i-th entity. The parameters that define the specific form of the constraint equation (in our example π_i) are the elements that form the common map m^[com]. For practical applications, we also define a projection function g, used in the optimization process for error evaluation:

m^[k] = g_i^[k](m^[com], r^[k])    (20)
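The constraint (19) for a 2D line entity can be made concrete with a few lines of code: homogeneous points [x, y, 1] stacked as the rows of [M_i 1], multiplied by the coefficient vector π_i = (a, b, c) of the line ax + by + c = 0, must give (approximately) zero. The helper names are ours, not the paper's:

```python
# Sketch of the geometric constraint (19) for a 2D line entity.

def line_through(p, q):
    # coefficients pi of the line through two points, via the homogeneous cross product
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def constraint_residuals(points, pi):
    # rows of [M_i 1] multiplied by pi -- equation (19) evaluated element-wise
    a, b, c = pi
    return [a * x + b * y + c for (x, y) in points]

# Points sampled from the line y = 2x + 1; the common-map entry for this entity
# would store only pi instead of every individual point:
points = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
pi = line_through(points[0], points[1])
print(all(abs(r) < 1e-9 for r in constraint_residuals(points, pi)))  # True
```

Storing π_i (three numbers) instead of the four points (eight numbers) is exactly the dimensionality reduction discussed below; with noisy points, π_i would instead be found by a least-squares fit of the same residuals.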

This function has to be, from a general perspective, modality specific; usually, however, it would be very similar across all modalities. A consequence of parameterizing the map in this way is that the dimensionality of the map is greatly reduced compared to the unconstrained case, which is very likely to have positive effects on the optimization process, as shown in [23, 24]. The last practical aspect we discuss in this subsection is the obvious problem that, in real-world scenarios, the affiliation of point elements to specific geometrical entities is a priori unknown. Dividing single observations into parts, each describing a common entity, is generally a segmentation problem, and the probabilistic way to approach it is by statistical hypothesis testing:

p( G_i(m) = 0 | Z_{0:N} ) > α    (21)

where α is the significance level. This can be practically conducted by defining a statistic that evaluates whether the reprojection error can be explained by the observation noise,

t_i = d( h(X_{0:N}, m, v = 0), Z_{0:N} )    (22)

and comparing it against a given critical value, t_i < t_crit. In any case, the number of testable hypotheses will obviously be significantly higher than computational resources allow us to test, so a necessary part of the segmentation algorithm also has to be a method that generates hypotheses to test. An experiment showing a practical example of such an algorithm can be found in [25].



4.2. Sensors

In the perspective of the above-mentioned theory, let's analyze what properties an observation function has to meet to be compliant with, i.e. usable within, it. Just for formalism, we start with the obvious. Firstly, the mathematical model of the sensor has to be consistent with reality. Secondly, any sensor used as the primary source of data for a SLAM algorithm has to measure some spatially dependent quantity that is suitable to be mapped. This leads to an ambiguity of the model when the state or the map alone is unknown, whereas combined knowledge of both state and map forms an information gain:

p(z | x) = p(z | m) = p(z)    (23)

p(z | x, m) ≠ p(z)    (24)

From the perspective of multiple-sensor based SLAM, while assuming limited resources, it is also reasonable to consider whether every sensor will have a perceptible contribution to the overall result. The form of this contribution is, in this context, rather unclear; generally, it can be viewed as any criterion that evaluates the result. However, we usually think of it as a noticeable improvement in the common map variance:

Var p(m^[com] | Z^[1:K]) < Var p(m^[com] | Z^[1:K\k])    (25)

where the used probability distributions are the marginalized distributions

p(m^[com] | Z) = ∫_Ω p(X_{0:N}, m^[com], r^[k] | Z) dX_{0:N} dr^[k]    (26)

where Ω represents the domain of the marginalized quantities. Such a criterion is, however, practically impossible to compute a priori, and the only real possibility is to evaluate it experimentally. We used this condition to classify the usage of various sensor types; an overview is given in Table 1, with detailed descriptions following.

Table 1. Sensor type categorization

Category             Example         Usage
Low DOF              Thermometer     Mapping
Inertial             Accelerometer   Motion model
Modality profile     Camera          SLAM
Local structure      Lidar           SLAM
Link to ref. frame   GPS             Position reference
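Although criterion (25) is hard to evaluate a priori in general, it becomes trivial in an idealized Gaussian toy case, which at least shows what a "perceptible contribution" means numerically. For independent Gaussian observations of a scalar common-map parameter (and a flat prior), the posterior variance is the inverse of the summed precisions, so adding any sensor with finite noise strictly lowers it. The sensor variances below are invented for illustration:

```python
# A toy numeric check of the contribution criterion (25) under strong simplifying
# assumptions: scalar common-map parameter, independent Gaussian sensors, flat prior.

def posterior_variance(sensor_variances):
    # Var p(m_com | Z^[1:K]): inverse of the sum of the sensors' precisions
    return 1.0 / sum(1.0 / v for v in sensor_variances)

all_sensors = [0.04, 0.25, 1.0]     # e.g. lidar, RGB-D camera, a coarse monocular cue
without_last = all_sensors[:-1]     # Z^[1:K \ k]: drop the noisiest sensor

var_full = posterior_variance(all_sensors)
var_reduced = posterior_variance(without_last)
print(var_full < var_reduced)  # True: even the noisy sensor contributes something
```

In this idealized setting every sensor "contributes"; the categorization below exists precisely because real sensors violate these assumptions, so the variance reduction some of them offer is negligible or indirect.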

4.2.1. Low degrees-of-freedom

To this category belong sensors that quite clearly cannot satisfy the perceptible-contribution condition, because the number of degrees-of-freedom (DOF) of their observation range does not allow a sufficiently unambiguous localization in the observer's state-space. Typical members of this group are scalar sensors of local environmental quantities, i.e. thermometers, light-intensity sensors, etc., but a linear lidar can also be listed here when the observer moves in 3D space with 6 DOF. Sensors from this category can be used to create a unique-modality map (assuming that pose data is provided by another source); however, their direct contribution to SLAM algorithms can be considered none (with the exception of some multi-modal localization scenarios where the correct mode can be chosen only by a unique environmental quantity).

4.2.2. Inertial

This is the category of sensors that provide data linking subsequent observer states, i.e. data for the motion model. Clearly, these sensors do not fulfill the condition of observing an environmental quantity; they have no link to the environment structure. This group consists of various encoders, accelerometers, gyroscopes, etc. These are typical support sensors that have no direct way to contribute to the common map estimation. For historical reasons, observations from these sensors are marked with the symbol u rather than z.



4.2.3. Modality profile

Sensors in this category generally observe the properties of some ambient signal generated by the environment. From a practical perspective, these are mostly various types of cameras, which measure the directional intensity characteristics of electromagnetic radiation in a specific spectral interval (light). By assuming that individual parts of the obstacle surface emit or reflect light in such a way that the same spatial points can be identified in multiple images, we can use photogrammetry to reconstruct the viewed structure. A characteristic property is that standard photogrammetry techniques applied to single-camera data can provide a reconstruction determined only up to an unknown similarity transformation; the scale is thus unknown and, if needed, has to be fixed by bringing additional data into the process. Sensors of this category can, under the right conditions, be used to realize SLAM, as shown for example by [26] or [27], and can also be an addition to a multi-sensor SLAM system.

4.2.4. Local structure

This category contains the most typical sensors used in the context of SLAM algorithms. Observations provided by these sensors represent the profile of the surrounding environment from their perspective. Typical members of this group are lidars, rangefinders, and RGB-D cameras, and they have the potential to contribute to the common map estimation.

4.2.5. Link to reference frame

As the designation probably suggests, sensors of the last group provide direct information about position in some reference frame. These are sensors such as global navigation satellite systems (GNSS), local positioning systems (LPS), surveyed for example by [28], any similar beacon-based system, or even a compass.
From a formal perspective, these sensors do not observe any environmental property, so primarily they cannot contribute to the estimation of the common map; nevertheless, they have a large potential to contribute indirectly, as a link to a reference frame can eliminate any drift in pose estimation. The main problem is that these sensors may work poorly in urban areas or indoors (GNSS) or require special infrastructure (LPS), so such data are rarely available. Note that a substantial part of the motivation for SLAM algorithms lies precisely in the fact that pose data are directly unavailable, or at least unavailable in sufficient quality. 5.

5. CONCLUSION
We presented a theoretical analysis of fundamental aspects of the multi-sensor data-fusion based SLAM problem from the perspective of the probabilistic approach. We concluded that the most promising way to approach it in general is to utilize the concept of a common map, as shown by the presented archetype of partially collective mapping. As we see it, the typical SLAM algorithm based on data fusion published nowadays resembles the super-observation archetype, but this concept is in our opinion suboptimal in terms of robustness. Every sensor has limitations that determine the situations in which it can be used. The super-observation concept will work safely in situations given by the intersection of all sensors' fields of application; in contrast, the partially collective mapping archetype can work in situations given by their union. From a practical perspective, we discussed options for implementing the common map. As the mapped quantity, we proposed to utilize the surface of obstacles and to describe it as a piece-wise function composed of simple geometrical entities. We then identified three major problems that have to be solved before implementation. Firstly, the mathematical model of the geometrical entities must be defined; this includes defining the constraint equations, the specific form of the common map vector, the sensor-specific remainder vectors, and the projection function. Secondly, a statistic serving as a segmentation criterion must be defined. Lastly, a strategy for selecting regions to test against the geometrical-entity hypothesis must be defined. We have confidence in the proposed method, and our future work will be aimed at creating a real implementation and conducting experiments that compare its quality on publicly available datasets.

ACKNOWLEDGEMENT
The completion of this paper was made possible by grant No. FEKT-S-17-4234, "Industry 4.0 in automation and cybernetics," financially supported by the Internal Science Fund of Brno University of Technology, and by grant No. TN01000024 of the National Competence Center-Cybernetics and Artificial Intelligence.

Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 63 – 72

REFERENCES
[1] C. Cadena, et al., "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age," IEEE Transactions on Robotics, vol. 32, no. 6, pp. 1309-1332, Dec 2016. [Online]. Available: http://ieeexplore.ieee.org/document/7747236/
[2] S. Huang and G. Dissanayake, "A critique of current developments in simultaneous localization and mapping," International Journal of Advanced Robotic Systems, vol. 13, no. 5, pp. 1-13, 2016.
[3] G. Bresson, et al., "Simultaneous Localization and Mapping: A Survey of Current Trends in Autonomous Driving," IEEE Transactions on Intelligent Vehicles, vol. 2, no. 3, pp. 194-220, Sep 2017. [Online]. Available: http://ieeexplore.ieee.org/document/8025618/
[4] M. Beul, et al., "Fast Autonomous Flight in Warehouses for Inventory Applications," IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3121-3128, Oct 2018. [Online]. Available: https://ieeexplore.ieee.org/document/8392775/
[5] G. Klein and D. Murray, "Parallel Tracking and Mapping for Small AR Workspaces," in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, IEEE, Nov 2007, pp. 225-234. [Online]. Available: http://ieeexplore.ieee.org/document/4538852/
[6] R. A. Newcombe, et al., "KinectFusion: Real-time dense surface mapping and tracking," in 2011 10th IEEE International Symposium on Mixed and Augmented Reality, IEEE, Oct 2011, pp. 127-136. [Online]. Available: http://ieeexplore.ieee.org/document/6162880/
[7] J. Mccormac, et al., "Fusion++: Volumetric Object-Level SLAM," in 2018 International Conference on 3D Vision (3DV), Verona, 2018, pp. 32-41. [Online]. Available: https://ieeexplore.ieee.org/document/8490953/
[8] T. Whelan, et al., "ElasticFusion: Dense SLAM Without A Pose Graph," in Robotics: Science and Systems XI, Robotics: Science and Systems Foundation, Jul 2015. [Online]. Available: http://www.roboticsproceedings.org/rss11/p01.pdf
[9] T. Whelan, et al., "Real-time large-scale dense RGB-D SLAM with volumetric fusion," International Journal of Robotics Research, vol. 34, no. 4-5, pp. 598-626, 2015.
[10] F. Burian, P. Kocmanova, and L. Zalud, "Robot mapping with range camera, CCD cameras and thermal imagers," in 2014 19th International Conference on Methods and Models in Automation and Robotics, MMAR 2014, Institute of Electrical and Electronics Engineers Inc., 2014, pp. 200-205.
[11] F. Fang, X. Ma, and X. Dai, "A multi-sensor fusion SLAM approach for mobile robots," in IEEE International Conference Mechatronics and Automation, 2005, vol. 4, IEEE, 2006, pp. 1837-1841. [Online]. Available: http://ieeexplore.ieee.org/document/1626840/
[12] A. Garulli, et al., "Mobile robot SLAM for line-based environment representation," in Proceedings of the 44th IEEE Conference on Decision and Control, IEEE, 2005, pp. 2041-2046. [Online]. Available: http://ieeexplore.ieee.org/document/1582461/
[13] Y.-H. Choi, T.-K. Lee, and S.-Y. Oh, "A line feature based SLAM with low grade range sensors using geometric constraints and active exploration for mobile robot," Autonomous Robots, vol. 24, no. 1, pp. 13-27, Jan 2008. [Online]. Available: http://link.springer.com/10.1007/s10514-007-9050-y
[14] C. Ulas and H. Temeltas, "Plane-feature based 3D outdoor SLAM with Gaussian filters," in 2012 IEEE International Conference on Vehicular Electronics and Safety, ICVES 2012, 2012, pp. 13-18.
[15] H. Zhou, et al., "StructSLAM: Visual SLAM With Building Structure Lines," IEEE Transactions on Vehicular Technology, vol. 64, no. 4, pp. 1364-1375, Apr 2015. [Online]. Available: http://ieeexplore.ieee.org/document/7001715/
[16] K. Uehara, H. Saito, and K. Hara, "Line-based SLAM Considering Directional Distribution of Line Features in an Urban Environment," in Proceedings of the 12th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, SCITEPRESS - Science and Technology Publications, 2017, pp. 255-264.
[17] S. Yang, et al., "Pop-up SLAM: Semantic monocular plane SLAM for low-texture environments," IEEE International Conference on Intelligent Robots and Systems, pp. 1222-1229, 2016.
[18] A. Jelinek, L. Zalud, and T. Jilek, "Fast total least squares vectorization," Journal of Real-Time Image Processing, pp. 1-17, Jan 2016. [Online]. Available: http://link.springer.com/10.1007/s11554-016-0562-6
[19] R. Hulik, et al., "Continuous plane detection in point-cloud data based on 3D Hough Transform," Journal of Visual Communication and Image Representation, vol. 25, no. 1, pp. 86-97, 2014. [Online]. Available: http://dx.doi.org/10.1016/j.jvcir.2013.04.001
[20] K. Pathak, N. Vaskevicius, and A. Birk, "Uncertainty analysis for optimum plane extraction from noisy 3D range-sensor point-clouds," Intelligent Service Robotics, vol. 3, no. 1, pp. 37-48, 2009.
[21] H. Durrant-Whyte and T. Bailey, "Simultaneous localization and mapping: part I," IEEE Robotics & Automation Magazine, vol. 13, no. 2, pp. 99-110, 2006. [Online]. Available: http://ieeexplore.ieee.org/document/1638022/
[22] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. The MIT Press, 1999.
[23] J. Klecka, et al., "Non-odometry SLAM and Effect of Feature Space Parametrization on its Covariance Convergence," in IFAC-PapersOnLine, no. 25, Brno, 2016, pp. 139-144.
[24] J. Klecka and O. Bostik, "Effects of Environment Model Parametrization on Photogrammetry Reconstruction," in Mendel, Brno, 2018, pp. 151-158.
[25] J. Klecka, et al., "Plane segmentation and reconstruction from stereo disparity map," in Mendel, Brno, 2016, pp. 199-204.
[26] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-Scale Direct Monocular SLAM," in Lecture Notes in Computer Science, Fleet D., Pajdla T., Schiele B., Eds., Cham: Springer, 2014, pp. 834-849. [Online]. Available: http://link.springer.com/10.1007/978-3-319-10605-2_54
[27] R. Mur-Artal and J. D. Tardos, "ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, Oct 2016. [Online]. Available: http://arxiv.org/abs/1610.06475
[28] K. Curran, et al., "An evaluation of indoor location determination technologies," Journal of Location Based Services, vol. 5, no. 2, pp. 61-78, Jun 2011. [Online]. Available: http://www.tandfonline.com/doi/abs/10.1080/17489725.2011.562927

General concepts of multi-sensor data-fusion based SLAM (Jan Klečka)


International Journal of Robotics and Automation (IJRA) Vol. 9, No. 2, June 2020, pp. 73~83 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp73-83


Fuzzy logic controller design for PUMA 560 robot manipulator Abdel-Azim S. Abdel-Salam, Ibrahim N. Jleta Department of Electrical and Computer Engineering, The Libyan Academy, Libya

Article Info

Article history:
Received Oct 1, 2019
Revised Jan 13, 2020
Accepted Feb 11, 2020

Keywords:
Fuzzy logic controller (FLC)
PID-computed torque controller (PID-CTC)
PUMA 560 robot manipulator

ABSTRACT

The dynamic model of a robot manipulator consists of equations that are nonlinear and contain parameter variations due to changes in load, friction, and disturbance. The conventional computed-torque (PD and PID) controllers are not well suited to nonlinear, complex, time-variant systems with delay. In this paper, a fuzzy logic controller (FLC) is used because it is an efficient tool for controlling systems with nonlinearities and uncertain parameters. This paper aims to design a fuzzy logic controller for position control of a PUMA 560 robot manipulator. Based on simulation results, we conclude that the performance of the fuzzy logic controller, in terms of position tracking error under disturbance or load, is better than that of the conventional computed-torque (PD-CTC and PID-CTC) controllers.

This is an open access article under the CC BY-SA license.

Corresponding Author: Abdel-Azim S. Abdel-Salam Department of Electrical and Computer Engineering, The Libyan Academy, Tripoli, Libya. E-mail: abdualadem.saaid@academy.edu.ly

1. INTRODUCTION
The motion of a mechanical manipulator is described by dynamic equations that are highly nonlinear and complex. It is therefore very difficult to implement real-time control based on a detailed dynamic model of a robot; nonlinear effects in the manipulator, such as centrifugal forces, friction, and gravity, act as disturbances, and in this case conventional control is not effective for controlling the robot manipulator. A better solution to this complex control problem might result if human intelligence and judgment replace the design approach of finding an approximation to the true robot system [1]. The fuzzy logic controller can be used to overcome this problem, because fuzzy logic algorithms do not require a detailed mathematical description of the dynamic model to be controlled, and a fuzzy logic controller can be considered an intelligent controller, especially for nonlinear systems. The implementation of a fuzzy logic controller should therefore be computationally less demanding, and position control of a PUMA 560 robot manipulator can be achieved using a fuzzy logic controller. Computed torque control (CTC) is effective for nonlinear systems and is used for controlling robot manipulators; it applies feedback linearization to the nonlinear system to compute the torque needed by the arm. It works well when all dynamic and physical parameters are known, but if the robot's dynamic parameters vary, the performance of the controller will not be acceptable [2].
Zadeh presented his paper on fuzzy sets and fuzzy logic in 1965, being the first to use the theory of fuzzy sets and fuzzy logic [3], and he introduced the concept of the linguistic variable in 1973. Zadeh showed that fuzzy logic, in contrast to classical logic, may take values between false and true. In classical set theory, membership is binary: an element either belongs to a set or does not, yes or no, taking the value zero or one. This approach is not appropriate for many real-life applications, such as the set of aged people or the set of temperatures; an element of a fuzzy set instead takes a membership value between 0 and 1, meaning that elements of such sets represent not only a true or false value but also the degree of truth or the degree of falsity of each input [4]. The PUMA 560 robot, whose name is an abbreviation of Programmable Universal Manipulator for Assembly, was released in 1978 as the first modern industrial robot and became extremely popular. It is an industrial robot with 6 degrees of freedom and 6 rotational joints. The PUMA 560 was used for research in the 1980s and was a very common laboratory robot; it has been used widely in research because it is well studied and its parameters are well known, and such robots have been described as the "white mice" of research robots [5].

Journal homepage: http://ijra.iaescore.com

2. THE PUMA 560 ROBOT MODEL
The specification of the PUMA 560 robot manipulator used in this paper is taken from the paper by Armstrong, Khatib, and Burdick [6]. We used the Robotics Toolbox for MATLAB, created by Peter Corke in Australia, to simulate and implement the proposed controllers for position control of the PUMA 560 robot manipulator [7]. To define the modified D-H parameters we used the "Puma 560 AKB" model command, which creates the robot object P560m describing the kinematic and dynamic characteristics of an animated PUMA 560 robot manipulator [8]. In this paper, the modified D-H parameters are used; Craig [9] was the first to use the modified D-H convention, in 1986. The kinematic specifications are taken from the paper by Armstrong, Khatib, and Burdick. In the Robotics Toolbox, the actuators are included in the dynamic model of the PUMA 560 robot manipulator. Figure 1 shows the PUMA 560 robot manipulator according to the specification of Armstrong, Khatib, and Burdick, using the modified D-H parameters.
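As a language-neutral illustration of the modified (Craig) D-H convention used above (this sketch is not part of the Robotics Toolbox; it only shows how one link transform is assembled from the parameters a, α, d, θ):

```python
import math

def mdh_transform(a, alpha, d, theta):
    """Homogeneous transform for one link in Craig's modified D-H convention:
    T = Rot_x(alpha) * Trans_x(a) * Rot_z(theta) * Trans_z(d)."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,      0.0, 1.0],
    ]

# Example: all parameters zero give the identity transform.
T = mdh_transform(0.0, 0.0, 0.0, 0.0)
```

Chaining six such transforms with the PUMA 560 AKB parameter table yields the forward kinematics of the arm.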

Figure 1. The puma 560-AKB

3. CONTROLLER DESIGN
3.1. PD computed torque controller
The motion of the robot is described by the rigid-body dynamics:

τ = M(q)q̈ + V(q, q̇) + F(q̇) + G(q)    (1)

where M(q) is the inertia matrix, V(q, q̇) contains the centrifugal and Coriolis terms, F(q̇) is the friction term, and G(q) contains the gravity terms. One way to select the control signal u(t) is as proportional-plus-derivative (PD) feedback,

Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 73 – 83


u = kv ė + kp e    (2)

Then the full input for the PUMA 560 robot arm (Figure 2) becomes

τ = M(q)(q̈d + kv ė + kp e) + V(q, q̇) + F(q̇) + G(q)    (3)

Substituting (3) into (1) yields the linear closed-loop error dynamics

ë + kv ė + kp e = 0    (4)

This controller is shown in Figure 3 with ki = 0.
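To make the computed-torque law of (3) concrete, here is a hedged toy sketch in Python for a single joint (the 1-DOF dynamics m·q̈ + b·q̇ + g·cos(q) = τ and all gains below are illustrative assumptions, not the PUMA 560 model or the authors' Simulink scheme):

```python
import math

def pd_computed_torque(q, qd, q_des, qd_des, qdd_des, kp, kv,
                       m=1.0, b=0.1, g=9.81):
    """PD computed-torque law for a toy 1-DOF arm with assumed dynamics
    m*q'' + b*q' + g*cos(q) = tau."""
    e = q_des - q        # position error
    edot = qd_des - qd   # velocity error
    # inertia times (desired acceleration + PD feedback), eq. (3),
    # plus compensation of the nonlinear friction and gravity terms
    return m * (qdd_des + kv * edot + kp * e) + b * qd + g * math.cos(q)

def simulate(q0=0.0, q_des=1.0, kp=100.0, kv=20.0, dt=1e-3, steps=5000):
    """Explicit-Euler closed-loop simulation; returns the final position."""
    q, qd = q0, 0.0
    for _ in range(steps):
        tau = pd_computed_torque(q, qd, q_des, 0.0, 0.0, kp, kv)
        qdd = (tau - 0.1 * qd - 9.81 * math.cos(q)) / 1.0  # plant model
        qd += qdd * dt
        q += qd * dt
    return q
```

With kp = 100 and kv = 20 the error dynamics (4) are critically damped (kv² = 4·kp), so the joint settles at the target without overshoot.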

Figure 2. The puma 560 robot Arm with PD-CTC

Figure 3. The PD computed torque controller [7]

In this simulation we used some blocks from Peter Corke's Robotics Toolbox, as follows:
 Jtraj: the purpose of this block is to compute a joint-space trajectory between two joint coordinate poses. The command inside the block is:

[q, qd, qdd] = jtraj(q0, q1, n)    (5)

 RNE: the purpose of this block is to compute the inverse dynamics by the recursive Newton-Euler method. The command inside the block is:

tau = rne(robot, q, qd, qdd)    (6)

 Puma 560-AKB: the purpose of this block is to create a PUMA 560 robot object. The convention of this robot is taken from Armstrong, Khatib, and Burdick, using Craig's modified D-H parameters.
 Robot plot: the purpose of this block is graphical robot animation. The command inside the block is:

plot(robot, q)    (7)

Fuzzy logic controller design for PUMA 560 robot manipulator (Abdel-Azim S. Abdel-Salam)



3.2. PID computed torque controller
PD computed-torque control is very effective when all parameters of the arm are known and there is no disturbance τd. From classical control theory, if disturbances are constant, PD control gives a nonzero steady-state error. To make the system type I, we include an integrator in the feedforward loop, using the PID computed-torque controller [10] as in Figure 4. Figure 5 shows the PUMA 560 robot arm with PID-CTC.

ε̇ = e    (8)

u = kv ė + kp e + ki ε    (9)

where the arm control becomes

τ = M(q)(q̈d + kv ė + kp e + ki ε) + V(q, q̇) + F(q̇) + G(q)    (10)
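To illustrate why the integral state of (8)-(10) removes the constant-disturbance offset that plain PD leaves behind, here is a hedged toy simulation (a unit-inertia joint with the nonlinear terms already cancelled by the inner loop; all numbers are illustrative assumptions, not the PUMA 560 values):

```python
def settle(ki, kp=100.0, kv=20.0, tau_d=5.0, dt=1e-3, steps=20000):
    """Final position of a unit-inertia joint q'' = u - tau_d under the
    feedback u = kv*(-q') + kp*e + ki*eps of eq. (9), with target q_des = 1."""
    q, qd, eps = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - q
        eps += e * dt                         # integral of the error, eq. (8)
        u = kv * (-qd) + kp * e + ki * eps    # eq. (9) with qd_des = 0
        qdd = u - tau_d                       # constant disturbance torque
        qd += qdd * dt
        q += qd * dt
    return q

# PD alone (ki = 0) settles with the offset tau_d / kp = 0.05,
# while PID (e.g. ki = 50) drives the steady-state error to zero.
```

This mirrors the observation in the paper that the integrator makes the closed loop type I with respect to constant disturbances.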

Figure 4. The PID computed torque controller [6]

Figure 5. The Puma 560 robot arm with PID-CTC

3.3. Fuzzy logic controller
The following steps describe the method used to design the fuzzy logic controller.
 Define the inputs and output of the FLC (Figure 6): there are two inputs, the error e(t) and the change of error ė(t), and one output, the control signal u(t) to the plant (Figures 7 and 8).
 Fuzzify the input and output variables (Figure 9).
 In the design, we chose 2 inputs with 7 membership functions each and 1 output with 7 membership functions (Figure 10).



 In the design, the membership functions were selected from negative big (NB) to positive big (PB).
 Choose the inference mechanism rule to find the relation between the input and output; in this paper we used the Mamdani inference mechanism.
 Defuzzify the output variable of the fuzzy mechanism; the defuzzification method used in this paper is the center of gravity (COG).
NB means Negative Big, NM means Negative Medium, NS means Negative Small, ZE means Zero, PS means Positive Small, PM means Positive Medium, and PB means Positive Big. The rule base is in Table 1.

τ = τfuzzy    (11)
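As a minimal sketch of the Mamdani pipeline described above (triangular membership functions, min implication, max aggregation, COG defuzzification over a discretized universe), here is a hedged Python example; for brevity it uses three labels per variable instead of the paper's seven, and all membership parameters are illustrative assumptions:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

N = lambda x: tri(x, -2.0, -1.0, 0.0)   # negative
Z = lambda x: tri(x, -1.0, 0.0, 1.0)    # zero
P = lambda x: tri(x, 0.0, 1.0, 2.0)     # positive

RULES = [  # (mu_error, mu_change_of_error, center of the output label)
    (N, N, -1.0), (N, Z, -1.0), (N, P, 0.0),
    (Z, N, -1.0), (Z, Z, 0.0),  (Z, P, 1.0),
    (P, N, 0.0),  (P, Z, 1.0),  (P, P, 1.0),
]

def mamdani_cog(e, de, n=201):
    """Mamdani inference: min for AND, max for aggregation, COG defuzzification."""
    us = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    agg = [0.0] * n
    for mu_e, mu_de, center in RULES:
        w = min(mu_e(e), mu_de(de))  # rule firing strength
        for i, u in enumerate(us):
            agg[i] = max(agg[i], min(w, tri(u, center - 1.0, center, center + 1.0)))
    num = sum(u * m for u, m in zip(us, agg))
    den = sum(agg)
    return num / den if den else 0.0
```

By symmetry of the rules, zero error and zero change of error yield zero control output, and a large positive error yields a positive output.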

Figure 6. The puma 560 robot Arm with FLC

Figure 7. Membership function of e(t)

Figure 8. Membership function of ė(t)



Figure 9. The fuzzy logic controller subsystem

Figure 10. Membership function of output

Table 1. Rule base Output Error

NB NM NS ZE PS PM PB

NB NB NM NS NS PS PM PB

NM NM NS NS PS PM

Change of Error ZE PS NB NM NS NS NS NS ZE PS PS PS PS PM PB NS

4.

PM NM NS PS PS

PB NB NM NS PS PS PM

SIMULATION RESULTS
To test the PUMA 560 robot manipulator, the desired joint input angles are θ_final = [90˚, -90˚, 90˚, 45˚, 66˚, 15˚], with the initial position of the PUMA 560 robot manipulator being the zero position θ_initial = [0˚, 0˚, 0˚, 0˚, 0˚, 0˚].
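The desired trajectory between these two poses is produced by the jtraj block, which (in Corke's toolbox) interpolates each joint with a quintic polynomial having zero velocity and acceleration at both ends. A hedged Python sketch of that interpolation idea (not the toolbox implementation):

```python
def quintic_traj(q0, q1, n):
    """Joint-space trajectory from pose q0 to q1 in n steps using the quintic
    blend s(t) = 6t^5 - 15t^4 + 10t^3, which has zero velocity and
    acceleration at both endpoints."""
    traj = []
    for i in range(n):
        t = i / (n - 1)
        s = 6 * t**5 - 15 * t**4 + 10 * t**3
        traj.append([a + s * (b - a) for a, b in zip(q0, q1)])
    return traj

# interpolate all six joints from the zero pose to the target pose (degrees)
q0 = [0.0] * 6
q1 = [90.0, -90.0, 90.0, 45.0, 66.0, 15.0]
path = quintic_traj(q0, q1, 50)
```

The first sample equals the initial pose and the last sample equals the target pose, so the controllers are asked to track a smooth reference rather than a step.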


4.1. Without disturbance
The performance of PD-CTC, PID-CTC, and FLC without disturbance is good, as shown in Figures 11-13, respectively.

Figure 11. Position tracking curve of joint q5 with PD-CTC

Figure 12. Position tracking curve of joint q5 with PID-CTC

Figure 13. Position tracking curve of joint q5 with FLC

4.2. With disturbance
To test the PUMA 560 with disturbance, we took the maximum torque on each joint to compute the disturbance torque that needs to be added to the model, using the program shown in Figure 14. We take 10% of the maximum torque as the disturbance torque.



τd = [3.95, -9.87, 0.56, -0.095, -0.122, 0.031]. The position tracking curves of the PUMA 560 robot using PD-CTC, PID-CTC, and FLC are shown in Figures 15-17, respectively.

Figure 14. How to compute the torques

Figure 15. Position tracking curve of joint q5 with PD-CTC

Figure 16. Position tracking curve of joint q5 with PID-CTC


Figure 17. Position tracking curve of joint q5 with FLC

4.3. With load
To test the PUMA 560 with load, we added a 6 kg load to the model using the command:

P560m.payload(6, [0, 0, 0.1])

To remove the load we used the command:

P560m.payload(0)

4.4. The position tracking error curves
The position tracking curves of the PUMA robot using PD-CTC, PID-CTC, and FLC are shown in Figures 18-20. Figures 21-23 show the error signals of the position tracking curves of the PD-CTC, PID-CTC, and FLC controllers with disturbance, respectively.

Figure 18. Position tracking curve of joint q5 with PD-CTC

Figure 19. Position tracking curve of joint q5 with PID-CTC



Figure 20. Position tracking curve of joint q5 with FLC

Figure 21. The errors tracking curves of PD-CTC with disturbance

Figure 22. The errors tracking curves of PID-CTC with disturbance

Figure 23. The errors tracking curves of FLC with disturbance


5. DISCUSSION
The simulation shows that the performance of PD-CTC and PID-CTC without disturbance is good, but when the external disturbance is added, PD-CTC and PID-CTC perform poorly because of the disturbance torques, due to variations in load, coupling, and friction, that act on the joints. In this case the control problem becomes more difficult, and since PD-CTC and PID-CTC are not well suited to nonlinear systems, we used the fuzzy logic controller to overcome this problem. When we added the disturbance torque, the tracking error increased with PD-CTC and PID-CTC, but with the FLC the tracking performance remained good along the path. When we added the load, we noted that it affected joint 5: the tracking error of joint 5 increased with PD-CTC and PID-CTC, but with the FLC joint 5 achieved good tracking performance along the trajectory. The simulation thus shows that the performance of the fuzzy logic controller under disturbance or load is better, in terms of position tracking error, than that of PD-CTC and PID-CTC for position control of the PUMA 560 robot manipulator.

6. CONCLUSION
All simulations were performed using MATLAB and the Robotics Toolbox (Peter Corke). From the simulation results, we conclude that the performance of the fuzzy logic controller, in terms of position tracking error in the presence of disturbance or load, is better than the performance of the computed-torque controllers (PD-CTC and PID-CTC).

REFERENCES
[1] Cubero, S., "Industrial robotics: Theory, modelling and control," Pro Literatur Verlag, 2006.
[2] Piltan, F., Yarmahmoudi, M. H., Shamsodini, M., Mazlomian, E., and Hosainpour, A., "PUMA-560 robot manipulator position computed torque control methods using Matlab/Simulink and their integration into graduate nonlinear control and Matlab courses," International Journal of Robotics and Automation, vol. 3, no. 3, pp. 167-191, 2012.
[3] Zadeh, L. A., "Fuzzy sets," Information and Control, vol. 8, no. 3, pp. 338-353, 1965.
[4] Alassar, A. Z., "Modeling and control of 5 DOF robot arm using supervisory control," in 2010 The 2nd International Conference on Computer and Automation Engineering, vol. 3, pp. 351-355, 2010.
[5] Corke, P., "Robotics, Vision and Control: Fundamental Algorithms in MATLAB," 2nd ed., Springer, vol. 118, 2017.
[6] Armstrong, B., Khatib, O., and Burdick, J., "The explicit dynamic model and inertial parameters of the PUMA 560 arm," Proceedings IEEE International Conference on Robotics and Automation, vol. 3, pp. 510-518, 1986.
[7] Corke, P. I., "A computer tool for simulation and analysis: The robotics toolbox for MATLAB," Proceedings National Conference Australian Robot Association, pp. 319-330, 1995.
[8] Corke, P. I., "A robotics toolbox for MATLAB," IEEE Robotics and Automation Magazine, vol. 3, no. 1, pp. 24-32, 2008.
[9] Craig, J. J., "Introduction to Robotics: Mechanics and Control," 3rd ed., Pearson Prentice Hall, Upper Saddle River, NJ, USA, 2005.
[10] Lewis, F. L., Dawson, D. M., and Abdallah, C. T., "Robot Manipulator Control: Theory and Practice," CRC Press, 2003.



International Journal of Robotics and Automation (IJRA) Vol.9, No.2, June 2020, pp. 84~93 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp84-93


Robots for search site monitoring, suspect guarding, and evidence identification
Yi-Chang Wu1, Jih-Wei Le2, Huan-Chun Wang3
1Forensic Science Division, Investigation Bureau, Ministry of Justice
2,3Department of Electronic and Computer Engineering, National Taiwan University of Science and Technology

Article Info

Article history:
Received Dec 27, 2019
Revised Jan 12, 2020
Accepted Mar 4, 2020

Keywords:
Individual following
Object identification
Remote monitoring
Robot
ROS

ABSTRACT

As an initial trial, and in response to a lack of technological applications in government agencies, we have developed three multifunctional robots in accordance with our work environment and the nature of our tasks. The search site monitoring robot is fitted with a panoramic camera and large wheels for walk-around search site monitoring. The suspect guarding robot follows and guards a suspect by tracking an augmented reality marker worn by the suspect and identifying the human body through an infrared thermal camera. The evidence identification robot uses You Only Look Once (YOLO) to identify specific evidence at a search site and is equipped with a carrier and a high-torque motor for evidence transportation; it is set to issue warnings and emails to relevant personnel in specific emergencies. We have performed multiple experiments and tests to confirm the robots' effectiveness, verifying their applicability for technological task support in government agencies.

This is an open access article under the CC BY-SA license.

Corresponding Author: Yi Chang Wu, Forensic Science Division, Investigation Bureau, Ministry of Justice, No. 74, Zhonghua Rd., Xindian Dist., New Taipei City 231, Taiwan (R.O.C.). Email: shintenwu@gmail.com

1. INTRODUCTION
Technological evolution has brought convenience to our daily life at an extremely rapid pace and in various manners, one of which is the widespread application of robots in various fields, such as household cleaning [1], metal [2] and odor [3] detection, cultural preservation [4], and even paddy raking [5]. However, robots have rarely been employed by government agencies. Compared with civilian enterprises, government agencies are characterized by stiff and inflexible organizational structures, which have limited the application of innovative technology. The overall goal of this study is to promote the use of technology in government affairs; thus, three multifunctional robots, designed around the work environment and task characteristics of the bureau, have been developed, focusing on environmental monitoring, individual following, and object identification, respectively.
Methods developed for remote monitoring have diversified; in addition to basic visual feedback, numerous functions have been devised to serve the goals of development. Salh et al. employed an artificial neural network on a field-programmable analog array to control robot actions, applied a feature extraction algorithm for facial identification, and implemented an MQ4 sensor coupled with a peripheral circuit to create a smart monitoring robot capable of detecting flammable gas [6]. With the goal of applying a network-based robot system to remote monitoring, Sundaram et al. employed a standard communication protocol and a human-machine interface to directly control the robot architecture through a network and



acquire visual feedback [7]. Chirag et al. devised a robot with a video camera, a global positioning system, and a sensor installed for live streaming, voice control, and snapshots; the acquired data and images were stored on cloud servers for registered users to view [8]. Bokade et al. developed an Android-based application, which featured an MJPG streamer window for video streaming and buttons for controlling robots and cameras; a Raspberry Pi board was applied to control robots through commands [9]. In response to the rising awareness of safety issues worldwide, Rashid et al. designed a Raspberry Pi-based mobile monitoring system for live streaming and voice control; the system enabled remote monitoring control through dual-tone multi-frequency (DTMF) signaling using a network interface or a mobile phone keyboard [10].
Individual following requires locking onto a specifically targeted person without confusing that person with other individuals. An unmanned ground vehicle can be equipped with a video camera or other types of sensors to detect and track individuals within its range of sight [11]. Most studies have focused on refining the existing technology or proposing solutions to difficulties encountered in practical applications, and most have employed sensors for obtaining distance information. Wang et al. used an extended Kalman filter along with data collected by a camera and an ultrasonic sensor to develop a real-time three-dimensional (3D) individual-tracking system; they attempted to overcome its practical problems (such as occlusion) and to improve the scale precision of the 3D data [12]. To solve the disadvantages of robots that track users from behind, Nikdel et al. developed one that follows its user from the front; an extended Kalman filter, cameras with different fields of view, and a laser rangefinder were employed to estimate the user's relative position and speed, and a pre-established occupancy grid map was implemented to detect the target and predict its actions and trajectory of movement [13]. Using the A* path planning algorithm and data obtained through light detection and ranging (lidar) and gyroscopes, Huskić et al. developed a tracking robot that can follow targets while moving across various types of terrain and dynamic environments at high speed [14]. Chen et al. improved tracking quality through a consensus of corresponding methods and image preprocessing technology; they employed supervised learning to update features for redirection processes under redetection or complex backgrounds, developing FOLO, a two-dimensional appearance-based tracking robot [15]. Chen et al. employed two methods to develop automatic tracking robots [16, 17]. Selected online ada-boosting (SOAB) was integrated with depth information to improve upon the inability of the online ada-boosting (OAB) algorithm to maintain a fixed target size in a changing environment [16]. The RGB channel and a computed stereo depth image (called RGB-stereo depth, RGB-SD) were fed into a convolutional neural network (CNN) to output the required information, and a proportional-integral-derivative (PID) controller was employed to control the robot in target tracking; the latest action and posture of the target individual were used to calculate and predict its path when it temporarily disappeared from sight [17]. To improve upon the existing visual-based human-tracking method, Gupta et al.
employed Speeded Up Robust Features (SURFs) for target tracking; a k-dimensional tree (K-D tree) was jointly exploited with a Kalman filter for data classification to detect changes in posture, and a servo controller was applied to command the robot to follow the target [18]. Object identification is the most developed field in deep learning; it enables a system to establish and train a model according to various needs and thus is applied for diverse purposes. The Convolutional Neural Network (CNN) paradigm was created to improve upon the ability of deep neural networks to process only one-dimensional data [19] and is one of the most frequently applied technology paradigms for image feature extraction. Several CNN-based models have been established, such as Regions with CNN (R-CNN), Region Proposal Network (RPN), and You Only Look Once (YOLO) [20]. Yu et al. applied object identification technology in an existing advertisement system to improve outdoor advertising efficacy. They proposed an audience-oriented targeted advertising system integrated with biostatistics and machine learning; Microsoft Face application programming interface (API) was used to identify the sex and age of an individual, and a Single Shot MultiBox Detector (SSD) was employed to attain identification of multiple objects including vehicles, and the identification results were then used to determine the types of advertisements to broadcast [21]. In response to the problems associated with increasing traffic and its dynamic nature, Iyer et al. exploited a SSD to develop a traffic signal system able to adjust in real time. The types and number of vehicles at each intersection were detected and counted to calculate the duration of a follow-up green light session and the time until the next green light session. 
Moreover, after each cycle, in which all intersections have undergone a green-light session, the signal system automatically adjusts itself to the current traffic situation, achieving considerably higher efficiency than conventional fixed-time signal systems [22].
The hardware of the three multifunctional robots proposed in this study is based on TurtleBot3; the robots are intended for search site monitoring, suspect guarding, and evidence identification, respectively. The site-monitoring robot is equipped with a panoramic camera, and its large wheels give it the mobility to patrol search sites and monitor them through live streaming; users can thus view the site remotely through a corresponding software application. We implemented automatic tracking technology so that the suspect-guarding robot can follow a suspect to be interrogated; a laser rangefinder and an augmented reality (AR) marker are employed to follow the suspect, and a thermal camera is applied to identify the suspect. The evidence identification robot applies YOLO to identify evidence at the search site and is fitted with a carrier for transporting the evidence found; it also issues warnings when anomalies occur.

Robots for Search Site Monitoring, Suspect Guarding, and Evidence Identification (Yi-Chang Wu)

2. RESEARCH METHOD

This study developed several robotic technologies independently using the Robot Operating System (ROS) [23]. The ROS provides most of the functions of a traditional operating system, such as hardware abstraction, low-level device control, interprocess message transmission, and package management. It also provides relevant tools and procedural libraries that can be used to acquire, compile, and edit code and to achieve distributed computing. The standard ROS packages provide various stable and adjustable robot algorithms, and the standardized ROS communication interface lets developers devote more time to the design and actualization of new ideas and computations, thereby avoiding repetition of existing research outcomes. Modern robots usually require multiple computers to calculate the numerous processes they conduct. A robot can thus be equipped with several computers, with each computer powering part of the robot's transducers and drivers; alternatively, users can send control commands to a robot through their own devices, such as a tablet or smartphone. This type of human-machine interactive interface can be considered part of a distributed system, and the ROS helps resolve the communication problems that arise between different processes when several computers form such a system. Based on the ROS, we developed functions such as remote monitoring, automatic individual following, and object identification; the design and implementation of each function were as follows.

2.1. Autonomous smart navigation
2.1.1. Mapping
High-precision lidar (Figure 1) was used to construct a customized map (Figure 2) of the building using the gmapping algorithm [24]. The Rao-Blackwellized particle filter was used with the gmapping algorithm to achieve simultaneous localization and mapping (SLAM). A previous study [25] indicated that gmapping has high stability and excellent performance in terms of error rate and CPU load.
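The occupancy-grid idea underlying gmapping can be illustrated with a toy sketch. This is not the actual gmapping implementation: the log-odds-style increments are invented for the example, and the real algorithm additionally estimates the robot pose with a Rao-Blackwellized particle filter.

```python
def update_grid(grid, hits, free):
    """Toy occupancy update: cells where a lidar beam ended (hits) gain
    'occupied' evidence, cells the beam passed through (free) lose it."""
    for cell in hits:
        grid[cell] = grid.get(cell, 0.0) + 0.9   # evidence for "occupied"
    for cell in free:
        grid[cell] = grid.get(cell, 0.0) - 0.4   # evidence for "free"
    return grid

def occupied_cells(grid, threshold=0.5):
    """Cells whose accumulated evidence exceeds the occupancy threshold."""
    return {c for c, v in grid.items() if v > threshold}

grid = {}
# Two scans: the beam ending at (3, 0) passes through (1, 0) and (2, 0).
for _ in range(2):
    update_grid(grid, hits=[(3, 0)], free=[(1, 0), (2, 0)])
walls = occupied_cells(grid)
```

Repeated observations reinforce each other, so a single spurious reading does not immediately mark a cell as a wall.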

Figure 1. Lidar unit

Figure 2. Customized map of the building

2.1.2. Positioning
Using the data from the lidar and an inertial measurement unit (Figure 3), the adaptive Monte Carlo localization (AMCL) algorithm [26] was adopted to achieve positioning (Figure 4). The customized map was used with the algorithm to dynamically construct probability distributions of particles; the lidar-measured values were then used to adjust the probability distributions until the positioning results converged.
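The predict-weight-resample cycle behind Monte Carlo localization can be sketched in one dimension. This is illustrative only: the measurement model and noise values are invented, and real AMCL works on the 2D map above and adapts the particle count with KLD-sampling [26].

```python
import random

def mcl_step(particles, control, measurement, world_size=10.0, noise=0.2):
    """One predict-weight-resample cycle of Monte Carlo localization (1D)."""
    # 1. Motion update: move every particle by the control input plus noise.
    moved = [(p + control + random.gauss(0.0, noise)) % world_size for p in particles]
    # 2. Measurement update: weight particles by agreement with the reading.
    weights = []
    for p in moved:
        error = abs(p - measurement)
        error = min(error, world_size - error)        # wrap-around distance
        weights.append(1.0 / (1.0 + error * error))   # simple likelihood proxy
    # 3. Resampling: draw particles in proportion to their weights.
    return random.choices(moved, weights=weights, k=len(particles))

random.seed(0)
particles = [random.uniform(0.0, 10.0) for _ in range(500)]
for _ in range(30):
    particles = mcl_step(particles, control=0.0, measurement=4.0)
estimate = sum(particles) / len(particles)
```

After a few cycles the initially uniform particle cloud collapses around the position consistent with the measurements, which is the convergence behaviour described above.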

Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 84 – 93



Figure 3. Inertial measurement unit

Figure 4. AMCL positioning

2.1.3. Route planning and following
The probabilistic roadmap (PRM) algorithm [27] was used for route planning: connections were constructed between nodes and subsequently used to locate obstacle-free routes between the starting and finishing points (Figure 5). The PurePursuit algorithm [28] was used to execute the planned route, and look-ahead points were adjusted to ensure that the route was followed smoothly and correctly (Figure 6).
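The core of the Pure Pursuit rule in [28] is the steering curvature toward a look-ahead point; a minimal sketch follows. The look-ahead selection is simplified (real implementations interpolate along the path), and the frame transform assumes the robot's heading is aligned with the x-axis.

```python
import math

def pure_pursuit_curvature(lookahead_point):
    """Steering curvature toward a look-ahead point (x forward, y left),
    following the Pure Pursuit relation kappa = 2*y / L**2."""
    x, y = lookahead_point
    L2 = x * x + y * y
    if L2 == 0.0:
        return 0.0
    return 2.0 * y / L2

def pick_lookahead(path, position, lookahead_dist):
    """First path point at least `lookahead_dist` away from `position`,
    expressed in a frame centred on the robot (simplified selection)."""
    px, py = position
    for (x, y) in path:
        if math.hypot(x - px, y - py) >= lookahead_dist:
            return (x - px, y - py)
    return (path[-1][0] - px, path[-1][1] - py)

# A point straight ahead gives zero curvature; a point to the left, a left turn.
straight = pure_pursuit_curvature((2.0, 0.0))
left = pure_pursuit_curvature((2.0, 1.0))
target = pick_lookahead([(0, 0), (1, 0), (2, 0), (3, 0)], (0, 0), 1.5)
```

Enlarging the look-ahead distance smooths the executed route at the cost of cutting corners, which is why the look-ahead points are tuned as described above.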

Figure 5. PRM route planning

Figure 6. PurePursuit route execution

2.1.4. Dynamic environment detection and obstacle avoidance
The enhanced vector field histogram (VFH+) algorithm [29, 30] was adopted for dynamic environment detection and obstacle avoidance. This algorithm uses the data received from the sonar (Figure 7) and lidar (Figure 1) to construct a polar histogram of obstacles. Subsequently, the histogram thresholds and the minimum turning radius are used to determine the required route for obstacle avoidance (Figure 8).
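The polar-histogram stage can be sketched as follows. This is a simplification: VFH+ additionally smooths the histogram, accounts for the robot's width, and masks sectors by the minimum turning radius; the bin size and threshold here are illustrative.

```python
def polar_histogram(readings, num_sectors=36, max_range=3.0):
    """Build a VFH-style polar obstacle-density histogram from range
    readings given as (bearing_deg, distance_m) pairs."""
    hist = [0.0] * num_sectors
    sector_width = 360.0 / num_sectors
    for bearing, dist in readings:
        if dist >= max_range:
            continue                       # too far away to matter
        sector = int((bearing % 360.0) / sector_width)
        hist[sector] += max_range - dist   # density grows as obstacles get closer
    return hist

def free_sectors(hist, threshold=1.0):
    """Candidate steering sectors whose obstacle density is below threshold."""
    return [i for i, d in enumerate(hist) if d < threshold]

# Close obstacles ahead (0 and 10 degrees), a distant one behind (180 degrees).
readings = [(0.0, 0.5), (10.0, 0.6), (180.0, 2.9)]
hist = polar_histogram(readings)
open_dirs = free_sectors(hist)
```

Thresholding the histogram yields the set of candidate headings; the planner then picks the open sector closest to the goal direction.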


Figure 7. Sonar elements

Figure 8. Dynamic environment detection and obstacle avoidance

2.2. Remote human-machine control interface
The representational state transfer (RESTful) API [31, 32] (Figure 9) not only enables us to operate intelligent machines from websites, applications, and mobile devices, but also sends images from the visual system to users (Figure 10).
a. The RESTful API comprises three elements [33]:
- A URL for the web service (e.g., http://example.com/resources/).
- A data-interchange format that is accepted and returned by the web service (e.g., JSON).
- RESTful methods for making requests that are supported by the web service (e.g., POST, GET, PUT, or DELETE).
b. The RESTful API uses HTTP as the underlying protocol [32, 34]. Compared with conventional web services, RESTful is lightweight on both the client and server sides. The client side uses HTTP to request resources from the server side, which is responsible for processing requests and allocating resources. Because HTTP operations can be issued from websites, applications, and mobile devices, smart machines can be operated quickly and simply through a visual interface.
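The three elements listed above can be sketched with Python's standard library. The base URL and the `commands` resource are hypothetical placeholders, not the robot's actual endpoints.

```python
import json
import urllib.request

# Hypothetical endpoint; the robot's real URL scheme is not specified here.
BASE_URL = "http://robot.example.com/api"

def build_command_request(command, params):
    """Build a RESTful POST request carrying a JSON body: a resource URL,
    a JSON interchange format, and a standard HTTP method."""
    body = json.dumps({"command": command, "params": params}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/commands",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_command_request("move", {"linear": 0.2, "angular": 0.0})
# Sending would be urllib.request.urlopen(req); omitted here (no live server).
```

Because the request is plain HTTP with a JSON body, the same call can be issued from a website, a mobile application, or a desktop client, which is the lightweight property described above.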

Figure 9. RESTful API framework

Figure 10. Human-machine remote control interface (mobile devices such as tablets and smartphones)

2.3. Mobile search site monitoring
The search site monitoring robot, equipped with a panoramic camera and large wheels, monitors search sites while navigating by random-walk methods. The GV-VR360 panoramic camera (Figure 11) provides an all-around perspective and supports multiple functions such as live streaming. In practice, a single search mission can be conducted at multiple locations, and the robot may be applied outdoors, at sites without wireless networks, and in scenarios where navigation maps cannot be constructed in advance. We replaced the small wheels of the TurtleBot with larger ones so that the robot can adapt to various road surfaces and terrain types; a 4G network card and a gateway are installed for a virtual private network (VPN) connection, enabling remote control while the robot moves automatically. Moreover, the robot is connected to a self-established cloud system that provides real-time video monitoring, so users can monitor the search status at each location and direct or adjust the search mission at any time.



2.4. Individual-following using an AR marker
An AR marker is employed to follow an individual. Automatic Parking Vision, one of the applications in TurtleBot3, was originally designed to achieve automatic parking by tracking AR markers through a Raspberry Pi camera [35]. Because of the simple environment at the bureau and the unique purpose of our application, we had the suspect wear an AR marker for the robot to track. In addition, to prevent the suspect from removing the marker undetected, an infrared thermal camera (Figure 12) is exploited to help the robot identify the human body through thermography. A laser rangefinder is used to measure the distance between the suspect and the robot, and a random forest classifier is employed to assess the suspect's location. If the suspect leaves the robot's field of view for more than 3 seconds, the monitoring platform issues a warning to alert users to respond immediately to the emergency, thereby achieving suspect guarding.
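The guarding behaviour can be sketched as a simple supervision step. The 30 cm stop distance and 3 s timeout follow the text (see also Section 3.2); the proportional speed law and its cap are illustrative, not the robot's actual controller.

```python
def guard_step(marker_visible, distance_m, lost_since, now,
               stop_dist=0.3, lost_timeout=3.0):
    """One supervision step for the suspect-guarding behaviour.

    Returns (forward_speed, lost_since, warning)."""
    if marker_visible:
        # Follow the AR marker, stopping once within the stop distance.
        speed = 0.0 if distance_m <= stop_dist else min(0.5, 0.5 * (distance_m - stop_dist))
        return speed, None, False
    if lost_since is None:
        lost_since = now                   # marker just left the field of view
    warning = (now - lost_since) > lost_timeout
    return 0.0, lost_since, warning

ok_speed, _, _ = guard_step(True, 0.2, None, now=0.0)      # within 30 cm: stop
far_speed, _, _ = guard_step(True, 1.3, None, now=0.0)     # too far: drive forward
_, lost_t, early = guard_step(False, 0.0, None, now=10.0)  # just lost: start timer
_, _, alarm = guard_step(False, 0.0, 10.0, now=13.5)       # lost > 3 s: raise warning
```

In practice this step would run in the robot's control loop, with `marker_visible` and `distance_m` coming from the camera and laser rangefinder, and `warning` forwarded to the monitoring platform.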

Figure 11. GV-VR360 IP video camera

Figure 12. Optris PI 230 infrared thermal camera

2.5. Object identification and warning
A Horned Sungem vision kit, a Raspberry Pi camera, and YOLO are applied for object identification. YOLO, which converts the object identification task into a regression problem and integrates the operational procedure in a single neural network, requires relatively little calculation, is easy to train, and is fast [20, 36]. Lidar and a high-torque motor are implemented so that the robot can identify specific objects relevant to the focal points of the search mission, and a carrier is installed for transporting the evidence found. The objects that can currently be identified in our study include individuals, computers, screens, keyboards, mice, backpacks, handbags, and suitcases. A corresponding monitoring platform is also installed: when a single specific object of interest is identified, the platform issues a warning and the warning light turns red; when multiple objects of interest are detected (e.g., individuals and doors), a runaway alert is issued; when individuals and the aforementioned objects of interest are detected at the same time, an evidence destruction warning is issued; and when multiple individuals are detected, a collusion warning is issued and the warning light turns red.
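The warning rules above can be sketched as a small rule table over the detector's output. The label names and the exact rule encoding are illustrative assumptions, not the trained model's class list.

```python
# Illustrative evidence-class labels (not the trained model's exact classes).
EVIDENCE_CLASSES = {"computer", "monitor", "keyboard", "mouse",
                    "backpack", "handbag", "suitcase"}

def classify_alerts(detections):
    """Map a list of detected class labels to platform warnings,
    following the rules described in the text."""
    alerts = []
    persons = detections.count("person")
    evidence = [d for d in detections if d in EVIDENCE_CLASSES]
    if evidence:
        alerts.append("evidence_warning")            # object of interest found
    if persons >= 1 and "door" in detections:
        alerts.append("runaway_alert")               # person near an exit
    if persons >= 1 and evidence:
        alerts.append("evidence_destruction_warning")
    if persons >= 2:
        alerts.append("collusion_warning")
    return alerts

single = classify_alerts(["keyboard"])
runaway = classify_alerts(["person", "door"])
```

Keeping the rules in one pure function makes them easy to extend and to test independently of the detector.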

3. RESULTS

The three multifunctional robots developed to assist the tasks of the bureau are described as follows.

3.1. Search site monitoring robot
The search site monitoring robot consists of a GV-VR360 panoramic camera along with other peripheral devices (Figure 13). The robot is capable of wide-range walk-around monitoring (Figure 14) and provides live streaming through a cloud system (Figure 15); the videos captured by the robot are stored in the cloud for later playback and examination. An application (Figure 16) was also developed for users to monitor the search mission at all times.

Figure 13. Search site monitoring robot

Figure 14. Practical use of the search site monitoring robot


Figure 15. Live streaming

Figure 16. Interface of the application

3.2. Suspect guarding robot
The suspect guarding robot consists of a laser rangefinder, an infrared thermal camera, a Raspberry Pi camera, and other peripheral devices (Figure 17). In our design, an AR marker was placed on the suspect's foot for the robot to track (Figure 18). Figure 19 illustrates the interface of the robot's operational platform: on the left, the robot's following path is shown as a blue curve, and the blank region indicates the amount of idle time. Table 1 lists the robot's success rate in tracking the AR marker for each set of distances (50 tests per set). The robot was set to stop when it was within 30 cm of the suspect; when the target was lost for more than 3 s, the robot issued a warning and sent emails to the personnel.

Figure 17. Suspect guarding robot

Figure 18. Practical use of the suspect guarding robot

Figure 19. Operational interface of the suspect guarding robot

Table 1. Test results of the suspect guarding robot at different sets of distances from the suspect
Distance (m)   Number of tests   Number of successful tests   Success rate (%)
<0.3                 50                  50                        100
0.3-0.6              50                  39                         78
0.6-0.9              50                  38                         76
0.9-1.0              50                  38                         76
1.0-1.1              50                  33                         66
>1.1                 50                  23                         46

3.3. Evidence identification robot
The evidence identification robot comprises a Horned Sungem vision kit, a Raspberry Pi camera, lidar, and other peripheral devices (Figure 20). Figure 21 depicts the operational platform of the robot: the map on the left shows the robot's path, and the lights on the right indicate the objects and actions identified. Table 2 lists the robot's identification results, obtained with different samples of each object type, including both actual objects and images.

Figure 20. Evidence identification robot

Figure 21. Operational interface of the evidence identification robot

Table 2. Identification results for each type of object
Object     Number of tests   Successful tests   Failed tests   Success rate (%)
Person           50                 39               11               78
Monitor          50                 25               25               50
Laptop           50                 22               28               44
Mouse            50                 17               33               34
Keyboard         50                 15               35               30
Backpack         50                 25               25               50
Handbag          50                 21               29               42
Suitcase         50                 19               31               38

4. CONCLUSION

In this study, three key tasks were selected as the purposes of the three multifunctional robots, and experiments and adjustments were conducted to verify the robots' feasibility and practicality. Because government action can infringe on citizens' rights, the designs of technological applications must emphasize data protection in addition to efficacy and efficiency. Nevertheless, because of the relatively simple environments and routine missions of government agencies, appropriate technological task support is desirable. In the future, self-developed training models should be adopted to improve the robots' object identification efficacy, and the promotion of technological applications in government agencies should continue.

REFERENCES
[1] J. Lee, et al., "Autonomous multi-function floor cleaning robot with zig zag algorithm," Indonesian Journal of Electrical Engineering and Computer Science, vol. 15, no. 3, pp. 1653-1663, 2019.
[2] N. S. Ali, et al., "Multi-function intelligent robotic in metals detection applications," TELKOMNIKA Telecommunication Computing Electronics and Control, vol. 17, no. 4, pp. 2058-2069, Aug 2019.
[3] H. Widyantara, et al., "Wind direction sensor based on thermal anemometer for olfactory mobile robot," Indonesian Journal of Electrical Engineering and Computer Science, vol. 13, no. 2, pp. 475-484, Feb 2019.
[4] G. Indrawan, et al., "LBtrans-Bot: A Latin-to-Balinese script transliteration robotic system based on Noto Sans Balinese font," Indonesian Journal of Electrical Engineering and Computer Science, vol. 12, no. 3, pp. 1247-1256, Dec 2018.
[5] E. Abana, et al., "Rakebot: a robotic rake for mixing paddy in sun drying," Indonesian Journal of Electrical Engineering and Computer Science, vol. 14, no. 3, pp. 1165-1170, Jun 2019.
[6] T. A. Salh and M. Z. Nayef, "Intelligent surveillance robot," in 2013 International Conference on Electrical Communication, Computer, Power, and Control Engineering (ICECCPCE), Mosul, 2013, pp. 113-118.
[7] A. Sundaram, et al., "Remote surveillance robot system -- a robust framework using cloud," in 2015 IEEE International Symposium on Nanoelectronic and Information Systems, Indore, 2015, pp. 213-218.
[8] B. Chirag, A. E. Manjunath, and K. B. Badrinath, "An intelligent cloud based cost effective surveillance robot," in 2014 2nd International Conference on Emerging Technology Trends in Electronics, Communication and Networking, Surat, 2014, pp. 1-4.
[9] U. Bokade and V. R. Ratnaparkhe, "Video surveillance robot control using smartphone and Raspberry Pi," in 2016 International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, 2016, pp. 2094-2097.
[10] M. T. Rashid, P. Chowdhury, and M. K. Rhaman, "Espionage: A voice guided surveillance robot with DTMF control and web based control," in 2015 18th International Conference on Computer and Information Technology (ICCIT), Dhaka, 2015, pp. 419-422.
[11] M. J. Islam, et al., "Person following by autonomous robots: A categorical overview," arXiv preprint arXiv:1803.08202, 2018.
[12] M. Wang, et al., "Accurate and real-time 3-D tracking for the following robots by fusing vision and ultrasonar information," IEEE/ASME Transactions on Mechatronics, vol. 23, no. 3, pp. 997-1006, June 2018.
[13] P. Nikdel, et al., "The hands-free push-cart: Autonomous following in front by predicting user trajectory around obstacles," in 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, QLD, 2018, pp. 4548-4554.
[14] G. Huskić, et al., "Outdoor person following at higher speeds using a skid-steered mobile robot," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, 2017, pp. 3433-3438.
[15] E. Chen, ""FOLO": A vision-based human-following robot," in Proceedings of the 2018 3rd International Conference on Automation, Mechanical Control and Computational Engineering (AMCCE 2018), Dalian, May 2018, pp. 224-232.
[16] B. X. Chen, et al., "Person following robot using selected online Ada-Boosting with stereo camera," in 2017 14th Conference on Computer and Robot Vision (CRV), Edmonton, 2017, pp. 48-55.
[17] B. X. Chen, et al., "Integrating stereo vision with a CNN tracker for a person-following robot," in International Conference on Computer Vision Systems (ICVS 2017), 2017, pp. 300-313.
[18] M. Gupta, et al., "A novel vision-based tracking algorithm for a human-following mobile robot," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 7, pp. 1415-1427, July 2017.
[19] "A preliminary study of convolutional neural networks," https://chtseng.wordpress.com/2017/09/12/%E5%88%9D%E6%8E%A2%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E8%B7%AF/ (accessed August 9, 2019).
[20] "Object detection," https://mc.ai/%E7%89%A9%E9%AB%94%E5%81%B5%E6%B8%AC-object-detection/ (accessed August 7, 2019).
[21] T. J. Yu, et al., "AI-based targeted advertising system," Indonesian Journal of Electrical Engineering and Computer Science, vol. 13, no. 2, pp. 787-793, Feb 2019.
[22] P. R. Iyer, et al., "Adaptive real time traffic prediction using deep neural networks," IAES International Journal of Artificial Intelligence (IJ-AI), vol. 8, no. 2, pp. 107-119, June 2019.
[23] ROS Wiki, "ROS/Introduction," http://wiki.ros.org/ROS/Introduction (accessed May 17, 2018).
[24] G. Grisetti, et al., "Improved techniques for grid mapping with Rao-Blackwellized particle filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, Feb 2007.
[25] J. M. Santos, et al., "An evaluation of 2D SLAM techniques available in Robot Operating System," in 2013 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), 2013, pp. 1-6.
[26] D. Fox, "KLD-sampling: Adaptive particle filters," in Advances in Neural Information Processing Systems, 2001.
[27] L. E. Kavraki, et al., "Probabilistic roadmaps for path planning in high-dimensional configuration spaces," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 566-580, Aug 1996.
[28] R. C. Coulter, "Implementation of the pure pursuit path tracking algorithm," Carnegie Mellon University, Pittsburgh, Tech. Rep. CMU-RI-TR-92-01, Jan 1992.
[29] J. Borenstein and Y. Koren, "The vector field histogram - fast obstacle avoidance for mobile robots," IEEE Journal of Robotics and Automation, vol. 7, no. 3, pp. 278-288, 1991.
[30] I. Ulrich and J. Borenstein, "VFH+: Reliable obstacle avoidance for fast mobile robots," in Proceedings 1998 IEEE International Conference on Robotics and Automation, 1998, pp. 1572-1577.
[31] C. Pautasso, "RESTful Web service composition with BPEL for REST," Data & Knowledge Engineering, vol. 68, no. 9, pp. 851-866, Sep 2009.
[32] P. Belimpasakis and S. Moloney, "A platform for proving family oriented RESTful services hosted at home," IEEE Transactions on Consumer Electronics, vol. 55, no. 2, pp. 690-698, May 2009.
[33] R. Miracle, "Tutorial: Connecting to RESTful API services," https://coronalabs.com/blog/2015/06/02/tutorial-connecting-to-restful-api-services/, June 2015.
[34] C. Pautasso and E. Wilde, "Push-enabling RESTful business processes," in International Conference on Service-Oriented Computing, vol. 7084, pp. 32-46, 2011.
[35] "TB3 automatic parking vision," https://docs.idminer.com.tw/part-1-turtlebot3/17.-applications-butong-ying-yong/17.4.-tb3-automatic-parking-vision-zi-dong-ting-che-shi-yong-nie-ying-ji
[36] "Faster RCNN/YOLO/SSD comparison of algorithms," https://www.twblogs.net/a/5c781e01bd9eee31cea58559 (accessed August 7, 2019).


BIOGRAPHIES OF AUTHORS

Yi-Chang Wu works for the Investigation Bureau, Ministry of Justice. He is also a Ph.D. candidate in Electronic and Computer Engineering at National Taiwan University of Science and Technology.

Jih-Wei Lee received the B.A. in Electronic Engineering from National Taiwan University of Science and Technology, Taiwan, in 2012. Since 2013, he has been a Ph.D. candidate with the Department of Electronics Engineering of National Taiwan University of Science and Technology, Taiwan. His current research interests include 5G systems, iterative detection, and MIMO systems.

Huan-Chun Wang received the M.S. and Ph.D. degrees in Electrical Engineering from National Chung Cheng University, Taiwan, in 1994 and 1999, respectively. From 2000 to 2004, he was with ITRI, where he was mainly involved in a WCDMA project. Since 2004, he has been with the Department of Electronic Engineering of National Taiwan University of Science and Technology, Taiwan. His current research interests include iterative detection and MIMO systems.



International Journal of Robotics and Automation (IJRA) Vol.9, No.2, June 2020, pp. 94~112 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp94-112


Particle swarm optimization algorithms with selective differential evolution for AUV path planning

Hui Sheng Lim1, Shuangshuang Fan2, Christopher K.H. Chin3, Shuhong Chai4, Neil Bose5

1,2,3,4 National Centre for Maritime Engineering and Hydrodynamics, Australian Maritime College, University of Tasmania, Australia
5 Department of Ocean and Naval Architectural Engineering, Memorial University of Newfoundland, Canada

Article Info

Article history: Received Jan 13, 2020; Revised Feb 17, 2020; Accepted Mar 4, 2020

Keywords: Autonomous underwater vehicle; Hybridization; Monte Carlo methods; Particle swarm optimization; Path planning

ABSTRACT

Particle swarm optimization (PSO)-based algorithms are suitable for path planning of autonomous underwater vehicles (AUVs) because of their high computational efficiency. However, such algorithms may produce sub-optimal paths or require a higher computational load to produce an optimal path. This paper proposes a new approach that improves the ability of PSO-based algorithms to search for the optimal path while maintaining a low computational requirement. By hybridizing with differential evolution (DE), the proposed algorithms carry out the DE operator selectively to improve the search ability. The algorithms were applied in an offline AUV path planner to generate a near-optimal path that safely guides the AUV through an environment with a priori known obstacles and time-invariant non-uniform currents. The algorithm performances were benchmarked against other algorithms in an offline path planner, the reasoning being that an algorithm providing better computational efficiency on this minimum capability of a path planner will also outperform the tested algorithms in a realistic scenario. Through Monte Carlo simulations and the Kruskal-Wallis test, SDEAPSO (selective DE-hybridized PSO with adaptive factor) and SDEQPSO (selective DE-hybridized quantum-behaved PSO) were found to be capable of generating feasible AUV paths with higher efficiency than the other algorithms tested, as indicated by their lower computational requirement and excellent path quality.

This is an open access article under the CC BY-SA license.

Corresponding Author: Hui Sheng Lim, National Centre for Maritime Engineering and Hydrodynamics, Australian Maritime College, University of Tasmania, Launceston, TAS, 7250, Australia. Email: hui.lim@utas.edu.au

1. INTRODUCTION

AUVs are unmanned underwater vehicles that can be remotely programmed to conduct various missions, ranging from seabed survey, coastal mapping, and environmental monitoring for scientific research to anti-submarine warfare for defence. To date, numerous efforts have been made to enable the operation of AUVs in more dynamic and constrained environments, such as shallow coastal areas, deep ocean regions, and regions underneath ice shelves. Operating AUVs in highly dynamic regions is challenging and poses several technical issues, particularly for path planning. Planning the path of an AUV is essentially a multimodal optimization problem, and numerous optimization techniques have been proposed to solve it effectively and efficiently. Nonetheless, developing algorithms for AUV path planning still faces several intrinsic difficulties, particularly in balancing the computational requirements and the performance of the path planner. The high computational requirements of planning a path in a realistic 3D environment may lead to excessive energy drain in an AUV. A common way to keep the computational requirements of a path planner feasible is to reduce the problem to a 2D space [1]; this, however, compromises the performance of the path planner because less 3D information, such as the current field, bathymetry, and obstacles in the ocean environment, is available to it. Thus, a highly computationally efficient algorithm is required for effective AUV path planning in a realistic ocean environment. Recently, Zeng et al. [2] and Youakim and Ridao [3] compared and classified various path planning techniques, including the artificial potential field (APF), search-based methods, sampling-based methods, and optimization methods. The APF method [4] is fast and efficient but very susceptible to local minima. Search-heuristic-based planners such as Field D* [5] and Fast Marching* (FM*) [6] are capable of generating optimal and robust paths, but their computational efficiency is limited to less complex and lower-dimensional problems. Sampling-based methods such as Rapidly-exploring Random Trees (RRT) [7] and its variant RRT* [8] are effective for high-dimensional and highly time-constrained scenarios at the cost of path optimality, and the resultant paths often require further refinement. Meta-heuristic optimization methods such as evolutionary algorithms [9, 10] show excellent performance in terms of solution optimality and are effective for high-dimensional complex problems, but they may converge to local minima within finite time. Among the existing evolutionary algorithms, Zeng et al. [2] further pointed out that particle swarm optimization (PSO)-based algorithms are remarkably robust and efficient for solving high-dimensional path planning problems.

Journal homepage: http://ijra.iaescore.com
The PSO algorithm and its most significant variant, the quantum-behaved PSO (QPSO), have been used extensively in various optimization problems since their emergence in 1995 and 2004, respectively, owing to their fine search abilities and easy implementation [11]. Some pioneering examples of their application in path planning can be found in [12-14]. PSO-based path planners are suitable for dynamic environments where online path planning is required because they maintain a large pool of solutions that is available at any time during the mission; these solutions can serve as initial solutions whenever path replanning is required, significantly improving computational efficiency. Successful applications of PSO-based algorithms in online AUV path planning can be found in [15, 16]. Nonetheless, PSO-based algorithms are susceptible to convergence at local minima if the time allowed for path planning is limited, which is often the case in real AUV operations. In recent years, many strategies that modify the PSO and QPSO algorithms have been proposed to improve their performance in path planning for various autonomous systems, each variant claiming different improvements over the original PSO and QPSO algorithms. To benchmark the PSO and QPSO variants for AUV path planning, a recent comparison study [17] classified and evaluated the algorithms based on their solution quality, stability, and computational efficiency. It concluded that the hybridization of differential evolution (DE) into PSO and QPSO, proposed by [18], produces path planning solutions of the highest quality owing to stronger resistance to local minima, but at the cost of higher computational requirements.
Moreover, the findings of [17] suggested that an adaptive mechanism in the evolution of particles in the PSO algorithm can produce solution quality second only to that of the DE-hybridized algorithms, with a relatively low computational requirement; the adaptive PSO (APSO) proposed by [19] generated path planning solutions that balance solution quality and computational requirements. Inspired by the DE hybridization, several algorithms, namely SDEPSO (PSO with selective DE hybridization), SDEAPSO (PSO with adaptive factor and selective DE hybridization), and SDEQPSO (QPSO with selective DE hybridization), are proposed in this paper. These algorithms exploit the strengths of the DE-hybridized algorithms and minimize their weaknesses to improve algorithm performance. The proposed algorithms were implemented in an offline AUV path planner, and their performances were benchmarked against other meta-heuristic algorithms; an algorithm that provides better computational efficiency on this minimum capability of a path planner will also outperform the tested algorithms in a realistic online path planner. The objective of the AUV path planner is defined as finding a near-optimal path that safely guides the AUV from a starting position to a destination based on a minimum-time criterion. The path planning scenario, with a priori known obstacles and a non-uniform current field, was first simulated in a two-dimensional (2D) domain and then in a three-dimensional (3D) domain. Extensive Monte Carlo simulations were conducted on all algorithms, and the simulation results were analysed based on their respective solution qualities and stabilities. The rest of this paper is arranged as follows. Section 2 provides an overview of the basic PSO, QPSO, and their variants, including DEPSO, DEQPSO, and APSO. Section 3 describes the novel algorithms proposed in this paper.
The formulation of the path planning problem is outlined in Section 4. Section 5 presents the simulation setup, results, and discussion. The generated path solutions are then validated using an AUV simulator of the REMUS 100 in Section 6. Finally, Section 7 concludes the paper along with future research directions.

2. REVIEW ON PSO AND ITS VARIANTS

This section presents an overview of the particle swarm intelligence-based algorithms used for developing the novel algorithms, including the basic PSO, the basic QPSO, and their variants.

2.1. PSO algorithm
Introduced by Eberhart and Kennedy [20], PSO is a heuristic population-based optimization algorithm inspired by analogues of cognitive abilities and social interaction in animals. The algorithm consists of particles that move within a multidimensional search space to find potential solutions, which are represented by the particles' positions. The particles' velocities are iteratively updated from each particle's own experience (cognitive behaviour) and the entire swarm's experience (social behaviour) to vary the particles' positions. In a standard PSO algorithm consisting of N particles with D dimensions for solving a cost evaluation function f, the position vector of the i-th particle at the t-th iteration can be denoted as:

$X_i^t = [X_{i,1}^t, X_{i,2}^t, \ldots, X_{i,D}^t]$   (1)

Based on its previous best position pbest and the global best position in the swarm gbest, the velocity V and the position X of the i-th particle at the (t+1)-th iteration are updated by (2) and (3) respectively. pbest and gbest are determined based on the particle's fitness f(X) and its previous best fitness f(pbest), as shown in (4) and (5).

V_i^(t+1) = w V_i^t + C1 r1 (pbest_i^t − X_i^t) + C2 r2 (gbest^t − X_i^t)   (2)

X_i^(t+1) = X_i^t + V_i^(t+1)   (3)

pbest_i^(t+1) = { X_i^(t+1),  if f(X_i^(t+1)) < f(pbest_i^t);  pbest_i^t,  otherwise }   (4)

gbest^(t+1) = argmin_{i ∈ [1, N]} f(pbest_i^(t+1))   (5)

In (2), r1 and r2 are uniformly distributed random positive numbers less than 1.0. C1 and C2 denote the acceleration coefficients for the cognitive and social components respectively; they are both set to 2.0 for most applications [21]. Parameter w is the inertia weight introduced by [22] for balancing the global exploration and local exploitation of the particles. A common strategy is to set the inertia weight at an initial wmax value of 0.9 and decrease it linearly to a wmin value of 0.4 according to (6) as the iteration progresses [23, 24].

w = wmax − (wmax − wmin) · t / tmax   (6)
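The update loop defined by (2)–(6) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the sphere cost function, the search bounds and the fixed seed are assumptions, while the parameter defaults (C1 = C2 = 2.0, wmax = 0.9, wmin = 0.4, velocity clamp at 10% of the dynamic range) follow the values quoted above.

```python
# Minimal sketch of the standard PSO update, equations (2)-(6).
import random

def pso(cost, D=2, N=30, t_max=100, c1=2.0, c2=2.0,
        w_max=0.9, w_min=0.4, x_range=(-10.0, 10.0), seed=1):
    random.seed(seed)
    lo, hi = x_range
    v_max = 0.1 * (hi - lo)                      # 10% of the dynamic range
    X = [[random.uniform(lo, hi) for _ in range(D)] for _ in range(N)]
    V = [[0.0] * D for _ in range(N)]
    pbest = [x[:] for x in X]
    pbest_f = [cost(x) for x in X]
    g = min(range(N), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for t in range(t_max):
        w = w_max - (w_max - w_min) * t / t_max  # equation (6)
        for i in range(N):
            for d in range(D):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))   # (2)
                V[i][d] = max(-v_max, min(v_max, V[i][d]))     # velocity clamp
                X[i][d] += V[i][d]                             # (3)
            f = cost(X[i])
            if f < pbest_f[i]:                                 # (4)
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:                                # (5)
                    gbest, gbest_f = X[i][:], f
    return gbest, gbest_f
```

Calling `pso(lambda x: sum(v * v for v in x))` drives the swarm towards the minimum of a simple sphere function at the origin.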

where tmax is the maximum number of iterations defined for the algorithm. To confine the particles within the search space, the particle velocity V is usually bound to an interval of [-Vmax, Vmax], where the maximum velocity Vmax is recommended to be 10% to 20% of the dynamic range of the variables [24, 25].

2.2. QPSO algorithm
Inspired by the mechanics of quantum systems and a dynamical analysis of the PSO algorithm, Sun, Feng [26] proposed the QPSO algorithm. In QPSO, the position of the i-th particle can be updated using the following stochastic equation:

X_i^(t+1) = { p_i^t + β |mbest^t − X_i^t| ln(1/u),  if φ ≥ 0.5;
              p_i^t − β |mbest^t − X_i^t| ln(1/u),  if φ < 0.5 }   (7)

with the local attractor p_i^t = φ pbest_i^t + (1 − φ) gbest^t, and

mbest^t = (1/N) Σ_{i=1}^{N} pbest_i^t   (8)


where u is a uniformly distributed random positive number less than 1.0, φ is a uniformly distributed random positive number less than 1.0, and mbest is the mean best position, defined as the average of the personal best positions of all particles in the swarm as shown in (8). β is known as the contraction-expansion (CE) coefficient, which is the most critical parameter for tuning the convergence behaviour of QPSO. As suggested by the empirical study of parameter selection in [11], a β decreasing linearly from a maximum value βmax of 1.0 to a minimum value βmin of 0.5 according to (9) is suitable for most applications.

β = βmax − (βmax − βmin) · t / tmax   (9)
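A single particle's position update under (7)–(9) can be sketched as follows. The local-attractor form of (7) is the standard QPSO formulation; the helper below is illustrative rather than the authors' implementation, and the variable names mirror the paper's symbols.

```python
# Sketch of one QPSO position update, equations (7)-(9).
import math
import random

def qpso_position_update(X, pbest, gbest, mbest, beta, rng=random):
    """Return the updated position of one particle (a list of floats)."""
    new_x = []
    for d in range(len(X)):
        phi = rng.random()                 # uniform in [0, 1)
        u = rng.random() or 1e-12          # avoid log(0)
        p = phi * pbest[d] + (1.0 - phi) * gbest[d]   # local attractor
        step = beta * abs(mbest[d] - X[d]) * math.log(1.0 / u)
        new_x.append(p + step if phi >= 0.5 else p - step)   # (7)
    return new_x
```

Note that when a particle's position coincides with pbest, gbest and mbest, the step term vanishes and the update is a fixed point, which is what the contraction behaviour of a converged swarm should produce.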

2.3. DEPSO and DEQPSO algorithms
One of the most effective methods for improving PSO-based algorithms is hybridization, in which the beneficial features of other optimization techniques are combined with the PSO or QPSO algorithm. In [27], the basic PSO was combined with differential evolution (DE), resulting in a hybrid algorithm known as DEPSO. Inspired by DEPSO, [18] applied a similar hybridization concept to QPSO to propose DEQPSO. In both DEPSO and DEQPSO, the particles undergo the usual position update operations, followed by a successive three-step DE operation of mutation, crossover and selection, as described below.
- Mutation: A mutated donor vector U is first generated using (10):

U_i^t = gbest^t + (pbest_{r1}^t − pbest_{r2}^t) + (pbest_{r3}^t − pbest_{r4}^t)   (10)

where r1, r2, r3 and r4 are randomly selected particle indices that are mutually different, and different from the current index i and the particle index of the global best position, i.e. r1 ≠ r2 ≠ r3 ≠ r4 ≠ i ≠ gbest.
- Crossover: A trial vector T is generated to increase the diversity by conducting crossover between the donor vector and the personal best position, as shown in (11).

T_{i,j}^t = { U_{i,j}^t,  if r_j ≤ CR or j = r;  pbest_{i,j}^t,  otherwise },  j ∈ [1, D]   (11)

where CR is the crossover probability, suggested to be 0.85, r_j is a uniformly distributed random number ranging from 0 to 1.0, and r is a random positive integer ranging from 1 to the number of particle dimensions D.
- Selection: A greedy selection is used to decide whether the trial vector T should replace the current position X in the (t+1)-th iteration. The fitness of T is evaluated and compared with that of X; X is replaced only if T has a better fitness value, and is otherwise retained. This means the hybridized DE operation never deteriorates the solution, but only improves it or leaves it unchanged.
The DEPSO and DEQPSO algorithms were applied to solve the path planning problem of an unmanned aerial vehicle (UAV) in [18], and proved capable of generating significantly higher solution quality than the basic PSO and QPSO algorithms.

2.4. APSO algorithm
In basic PSO, the acceleration coefficients C1 and C2 and the inertia weight w in the update equations are important for maintaining the balance between the global exploration and local exploitation of the particles. Zhan, Zhang [19] proposed an adaptive PSO (APSO), in which an evolutionary factor is used as an indicator of the evolutionary state of the particles to control the equation coefficients adaptively. To determine the evolutionary factor, the mean particle distance d_i of the i-th particle to the other particles is calculated using (12). The evolutionary factor f is then computed according to (13).

d_i = (1/(N−1)) Σ_{j=1, j≠i}^{N} √( Σ_{k=1}^{D} (x_{i,k} − x_{j,k})² )   (12)

f = (d_g − d_min) / (d_max − d_min) ∈ [0, 1]   (13)

Particle swarm optimization algorithms with selective differential evolution... (Hui Sheng Lim)


where d_g is the mean particle distance of the global best particle, and d_min and d_max are the minimum and maximum of the mean particle distances respectively. The inertia weight w can be calculated from the evolutionary factor f using (14). The adaptation of the acceleration coefficients C1 and C2 can also be achieved using the evolutionary factor, as shown in (15).

w(f) = 1 / (1 + 1.5 e^(−2.6 f)) ∈ [0.4, 0.9]   (14)

C_i^(t+1) = C_i^t ± δ,  where δ ∈ [0.05, 0.1] is the acceleration rate   (15)
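The evolutionary factor and the resulting adaptive inertia weight of (12)–(14) can be computed directly from the particle position matrix. The sketch below is illustrative (the helper names are not from the paper); the input is the N×D position list plus the index of the global best particle.

```python
# Sketch of the APSO evolutionary factor and adaptive inertia
# weight, equations (12)-(14).
import math

def mean_distance(X, i):
    """Mean Euclidean distance of particle i to the other particles, eq. (12)."""
    N = len(X)
    total = 0.0
    for j in range(N):
        if j != i:
            total += math.sqrt(sum((a - b) ** 2 for a, b in zip(X[i], X[j])))
    return total / (N - 1)

def evolutionary_factor(X, g):
    """Evolutionary factor f in [0, 1], eq. (13); g is the gbest index."""
    d = [mean_distance(X, i) for i in range(len(X))]
    d_min, d_max = min(d), max(d)
    if d_max == d_min:
        return 0.0
    return (d[g] - d_min) / (d_max - d_min)

def adaptive_inertia(f):
    """Inertia weight from the evolutionary factor, eq. (14)."""
    return 1.0 / (1.0 + 1.5 * math.exp(-2.6 * f))
```

At f = 0 (a tightly converged swarm) the weight is exactly 0.4, and as f approaches 1 (an exploring swarm) it rises towards 0.9, which is the [0.4, 0.9] range stated in (14).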

3. METHODOLOGY: SELECTIVE DE HYBRIDIZATION
Although the DEPSO and DEQPSO algorithms are able to generate excellent solution qualities for AUV path planning, the hybridization of DE significantly increases the computational requirements of the algorithm due to the greedy selection operator used in the DE operation [17]. The greedy selection operator requires the fitness of the particles to be evaluated twice for comparison purposes, meaning an additional fitness evaluation for every particle in every iteration. As the fitness evaluation process usually contributes the majority of the computational time [11], the greedy operator drastically increases the computational requirements of the algorithms. This increase becomes even more pronounced as the complexity and dimensionality of the problem grow [17]. In order to minimize the downside of the DE operator in PSO-based algorithms, a selective hybridization scheme is proposed in this paper, yielding the following algorithms:
- SDEPSO (PSO with selective DE hybridization)
- SDEAPSO (PSO with adaptive factor and selective DE hybridization)
- SDEQPSO (QPSO with selective DE hybridization)
Using the selective scheme, these proposed algorithms apply the DE operation to a selected number of particles only, instead of the entire swarm. The number of particles selected for the DE operation, N_S, is controlled by a selective factor S as shown in (16).

N_S = round(S · N),  S ∈ [0, 1]   (16)

The DE operation in the proposed algorithms was modified by replacing the greedy selection operator with a natural selection operator. The DE operation proposed in this paper begins by sorting all the particles in the swarm according to their personal best positions. Next, the N_S selected particles with the best fitness undergo the mutation and crossover operators, similar to those in DEPSO and DEQPSO, to generate the same number of trial vectors. The trial vectors are then subjected to a natural selection operator, in which the same number of particles with the worst fitness are replaced by the trial vectors. As only the worst particles are replaced in this process, the potentially best solutions never deteriorate. Furthermore, the computational requirements of the algorithms are not significantly affected, because the natural selection operator does not involve fitness comparison between the particles, which would require an additional particle fitness evaluation in every iteration. The DE operation with natural selection increases the diversity and the evolutionary rate of the entire swarm by eliminating the least desirable solutions, theoretically leading to faster and better global convergence. The selective DE hybridization was applied to the PSO and QPSO algorithms to develop the SDEPSO and SDEQPSO algorithms in this paper. In addition, another algorithm, namely SDEAPSO, was developed by adding an adaptive mechanism to the control of the inertia weight and acceleration coefficients in the PSO algorithm, similar to the APSO algorithm. The implementation of the SDEPSO, SDEAPSO and SDEQPSO algorithms in AUV path planning can be conducted as described in the following pseudocode.

Step 1. Input the algorithm parameters and environmental information of the ocean field.
Step 2. Initialize particles with random positions as in (1) to represent an initial group of candidate paths. Set pbest to the current particle positions.
Step 3. While the stop criteria are not met:
  Step 3.1 For t = 1, 2, …, tmax:
    - SDEPSO: Evaluate the cost function f(X_i^t). Update pbest and gbest according to (4) and (5) respectively. Update w according to (6).
    - SDEAPSO: Evaluate the cost function f(X_i^t). Update pbest and gbest according to (4) and (5) respectively. Update w, C1 and C2 according to (14) and (15) respectively.
    - SDEQPSO: Compute mbest according to (8). Evaluate the cost function f(X_i^t). Update pbest and gbest according to (4) and (5) respectively. Update β according to (9).




  Step 3.2 For each particle i = 1, 2, …, N:
    - SDEPSO: Update the particle velocity and position according to (2) and (3) respectively.
    - SDEAPSO: Update the particle velocity and position according to (2) and (3) respectively.
    - SDEQPSO: Update the particle position according to (7).
  Step 3.3 Sort all particles according to the fitness of their personal best positions.
  Step 3.4 For k = 1, 2, …, N_S (the k-th best performing particle):
    Mutation: Generate the mutated vector U_k^t according to (10).
    Crossover: Generate the trial vector T_k^t according to (11).
    Natural selection: Replace the k-th worst performing particle with the trial vector T_k^t.
Step 4. Output gbest, which holds the optimal path, when the stop criteria are met or tmax is reached.
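The selective DE operation of Steps 3.3–3.4 can be sketched as below. This is an illustrative sketch of the scheme, not the authors' code: the DE/best/2-style mutation form and the CR value follow Section 2.3, and the way the personal-best list is updated in place is an implementation assumption.

```python
# Sketch of the selective DE operation (Steps 3.3-3.4): the N_S
# best particles produce trial vectors by mutation (10) and
# crossover (11), which then replace the N_S worst particles.
import random

def selective_de(pbest, pbest_f, gbest_idx, S=0.3, CR=0.85, rng=random):
    """Replace the worst round(S*N) personal bests with trial vectors."""
    N, D = len(pbest), len(pbest[0])
    NS = round(S * N)                                   # equation (16)
    order = sorted(range(N), key=lambda i: pbest_f[i])  # best -> worst
    for k in range(NS):
        i = order[k]                                    # k-th best particle
        # Mutation (10): four mutually different indices, excluding i and gbest
        pool = [j for j in range(N) if j not in (i, gbest_idx)]
        r1, r2, r3, r4 = rng.sample(pool, 4)
        U = [pbest[gbest_idx][d]
             + (pbest[r1][d] - pbest[r2][d])
             + (pbest[r3][d] - pbest[r4][d]) for d in range(D)]
        # Crossover (11): take the donor entry with probability CR,
        # and always in one randomly chosen dimension r
        r = rng.randrange(D)
        T = [U[d] if (rng.random() <= CR or d == r) else pbest[i][d]
             for d in range(D)]
        # Natural selection: overwrite the k-th worst particle. No extra
        # fitness evaluation is performed here, matching the paper's point
        # that natural selection avoids the cost of greedy selection.
        worst = order[N - 1 - k]
        pbest[worst] = T
    return pbest
```

Because only the worst-ranked particles are overwritten, the best personal bests are never touched, which is the "never deteriorate" property claimed for the natural selection operator.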

3.1. Complexity Analysis
The time complexity of the proposed algorithms can be measured by counting the number of primitive operations in the algorithm. Referring to the pseudocode of the proposed algorithms, the operations can be counted as follows:
- In Step 2, initialization contributes one operation N times.
- In Step 3.1, the cost function evaluation contributes one operation N times; finding pbest requires N·log(N) operations; finding gbest requires log(N) operations; updating the coefficients contributes one operation; SDEQPSO requires N additional operations to calculate mbest.
- In Step 3.2, SDEPSO and SDEAPSO perform N loops with 14 operations each; SDEQPSO performs N loops with 12 operations each.
- Step 3.3 contributes log(N) operations.
- Step 3.4 performs N_S loops with 8 operations each.
Steps 1–3.2 are the standard operations in basic PSO, APSO and QPSO, whereas Steps 3.3 and 3.4 are the additional operations introduced by the selective DE operator. O-notation is used in this work to denote the asymptotic upper bound of time complexity, which indicates the computational time of the algorithm in the worst-case scenario. When computing the O-notation, the lower-order terms in the operation count are negligible because their impact on computational time is relatively insignificant for large inputs [28]. The highest-order term is the N·log(N) in Step 3.1, which is performed tmax times while checking the termination condition. The operations added by the selective DE operator (Steps 3.3 and 3.4) are of lower order and do not have a significant impact on the time complexity. Thus, the complexity of the proposed algorithms is O(N·log(N)·tmax), similar to the standard PSO algorithm. PSO-based algorithms have two inner loops when going through the population of N particles, and one outer loop for tmax iterations; this renders the time complexity O(N²·tmax) in the extreme case.
The spatial complexity of the algorithms is O(N²), which depends on the population size.

3.2. Benchmark Functions
Metaheuristic algorithms such as the PSO-based algorithms can be evaluated empirically by comparing their performance in solving a set of objective function problems. In addition to the AUV path planning problem, a number of non-linear continuous function problems were used to study and benchmark the characteristics of the proposed algorithms. According to the "no free lunch" (NFL) theorem [29], the development and evaluation of an algorithm for a specific problem should be based on benchmark function problems of a similar class and properties, because algorithm performance will not be consistent across every kind of problem. Thus, these benchmark functions were selected based on their resemblance to the properties of the path planning problem. The selected benchmark functions should have the following properties:
- Multimodal with deceptive local minima and one global minimum, because the path planning problem usually consists of multiple suboptimal paths and one optimal path.
- Multi-dimensional, because the dimensionality of the path planning problem depends on the number of control waypoints along the path.
Four test functions were chosen for benchmarking in this study. These minimization problems, which are commonly used to evaluate the characteristics of optimization algorithms, exhibit the abovementioned properties. Information on the selected benchmark functions is given in Table 1. The dimensions of all functions are set to 20 in this study.



Table 1. Benchmark functions

Notation | Name | Function formulation | Boundary interval | Global minimum
F1 | Griewank [30] | f(x) = (1/4000) Σ_{i=1}^{D} x_i² − Π_{i=1}^{D} cos(x_i/√i) + 1 | [-600, 600] | f(x) = 0, at x = (0, …, 0)
F2 | Rastrigin [31] | f(x) = Σ_{i=1}^{D} [x_i² − 10 cos(2πx_i) + 10] | [-5.12, 5.12] | f(x) = 0, at x = (0, …, 0)
F3 | Ackley [32] | f(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i²)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e | [-32, 32] | f(x) = 0, at x = (0, …, 0)
F4 | Schwefel [33] | f(x) = 418.9829 D − Σ_{i=1}^{D} x_i sin(√|x_i|) | [-500, 500] | f(x) = 0, at x = (420.9687, …, 420.9687)
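The four functions of Table 1 are standard test problems, so they can be written out directly for reference; the dimension D is implied by the length of the input vector.

```python
# The four benchmark functions of Table 1 in their standard forms.
import math

def griewank(x):                                     # F1
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0

def rastrigin(x):                                    # F2
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def ackley(x):                                       # F3
    d = len(x)
    a = -20.0 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / d))
    b = -math.exp(sum(math.cos(2.0 * math.pi * v) for v in x) / d)
    return a + b + 20.0 + math.e

def schwefel(x):                                     # F4
    return 418.9829 * len(x) - sum(v * math.sin(math.sqrt(abs(v))) for v in x)
```

Evaluating each function at its listed global minimum returns (approximately) zero, which is a quick sanity check on the formulations in Table 1.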

3.3. Empirical Study on Parameter Selection
In SDEPSO, SDEAPSO and SDEQPSO, the number of best performing particles that undergo the DE operation and the number of worst performing particles that are replaced during the natural selection are determined by the selective factor S. Thus, S can be manipulated to control the diversity of the population. In order to study the effects of S on the algorithm performance, an empirical study was conducted on SDEPSO using a range of S values. The selective factor S is a positive number less than 1.0. Note that when S = 0, the algorithm is not hybridized with DE at all, while S = 1 means the DE operation is conducted on the entire swarm and the entire swarm is replaced during the natural selection, so that all the solutions generated from the PSO operation are discarded, which is undesirable. Therefore, the empirical study covers S values ranging from 0 to 0.9, meaning that 0%–90% of the particles undergo the DE operation; the results for S = 0 are included for comparison purposes. Through a 1000-run Monte Carlo simulation with a maximum of 100 iterations and a population size of 150 particles, the performance of SDEPSO under different S settings was evaluated by solving the optimization problems of the benchmark functions and the path planning problem in 2D and 3D scenarios; the formulation of the path planning problem is described in Section 4. Prior to evaluating the algorithm performance, the Shapiro-Wilk test was performed to examine the normality of the obtained simulation data. The normality test revealed that the data were not normally distributed; hence, the median was used as the indicator of solution quality. The median of the fitness obtained (Med.) and the best known fitness (Best) for each setting of S were obtained for all problems and tabulated in Table 2. A lower fitness value indicates a higher solution quality and hence a stronger search ability.

Table 2. Empirical study results

S    | F1 Med | F1 Best | F2 Med | F2 Best | F3 Med | F3 Best | F4 Med | F4 Best | 2D Med | 2D Best | 3D Med | 3D Best
0    | 0.86 | 0.25 | 1.28 | 0.41 | 0.19 | 0.06 | 2.61 | 1.54 | 3.07 | 2.97 | 3.36 | 3.30
0.10 | 0.58 | 0.13 | 1.22 | 0.42 | 0.15 | 0.06 | 2.22 | 1.20 | 3.06 | 2.99 | 3.20 | 3.14
0.20 | 0.56 | 0.13 | 1.20 | 0.50 | 0.15 | 0.07 | 2.08 | 0.69 | 3.01 | 2.97 | 3.34 | 3.13
0.30 | 0.63 | 0.19 | 1.15 | 0.21 | 0.17 | 0.05 | 1.89 | 0.85 | 2.98 | 2.91 | 3.14 | 3.18
0.40 | 0.68 | 0.34 | 1.29 | 0.51 | 0.23 | 0.08 | 1.90 | 0.81 | 3.06 | 2.96 | 3.30 | 3.15
0.50 | 0.66 | 0.30 | 1.27 | 0.40 | 0.26 | 0.12 | 1.71 | 0.65 | 3.12 | 3.02 | 3.44 | 3.15
0.60 | 0.73 | 0.14 | 1.41 | 0.52 | 0.32 | 0.11 | 1.70 | 0.63 | 3.05 | 2.98 | 3.42 | 3.18
0.70 | 0.80 | 0.34 | 1.61 | 0.50 | 0.47 | 0.10 | 1.71 | 0.60 | 3.00 | 2.98 | 3.33 | 3.19
0.80 | 0.87 | 0.43 | 1.59 | 0.84 | 0.74 | 0.26 | 1.51 | 0.57 | 3.05 | 2.97 | 3.22 | 3.19
0.90 | 1.00 | 0.85 | 1.77 | 0.61 | 1.67 | 0.43 | 1.27 | 0.48 | 3.08 | 2.97 | 3.35 | 3.25

(The 2D and 3D path planning values are ×10².)

The best-performing results for each problem are shown in bold in Table 2. It can be observed from the results that the behaviour of the algorithm varies greatly as S increases, and the variations are not consistent across problems. The best results for the majority of the problems lie in the range S = 0.1–0.3, except for problem F4. This result can be explained by the geometry of the Schwefel function F4, which has all its local minima and its global minimum spread far apart from one another. Effective optimization of this function requires an algorithm that promotes larger solution diversity (higher S), providing a jumping-out ability that prevents trapping in deceptive local minima. This complies with the NFL theorem, which suggests that no single algorithm can perform better than all other algorithms on every problem. In fact, improved algorithm performance on one class of problems is not necessarily consistent across all kinds of problems; instead, it is exactly traded against performance on another class of problems [29]. Although all the function problems selected for benchmarking purposes have similar properties (they are all multimodal and multi-dimensional), the geometries of the problems are different. Therefore, the setting of S should be adjusted accordingly for different optimization problems. Based on this empirical study, it can be deduced that the optimal setting of S for the majority of the tested problems is in the range of 0.1–0.3. More specifically for the path planning problem, the setting of S = 0.3 was found to be appropriate and effective.

3.4. Benchmark Study
The benchmark functions were used to evaluate and benchmark the proposed algorithms in this study. Through a 1000-run Monte Carlo simulation with a maximum of 100 iterations and a population size of 150 particles, the performance of the proposed algorithms in solving the optimization problems of the four benchmark functions was compared with that of other existing PSO-based algorithms. In each run, the initial particle positions for all problems were randomly generated from a uniform distribution within the boundary intervals given in Table 1. As the data were not normally distributed according to the Shapiro-Wilk test, the Kruskal-Wallis test [34], a non-parametric ANOVA (analysis of variance), was used with a significance level of 0.05 to rank the algorithm performance based on the solution qualities (fitness obtained). The ranking procedure used the Holm–Bonferroni 'step-down' approach [35], which is best suited for all pairwise comparisons when confidence intervals are not needed and sample sizes are equal [11]. Algorithms are given the same rank if they are not statistically different from one another. The medians (Med.) of the fitness obtained, the ANOVA ranks (#R) and the medians of the computational time required are tabulated in Table 3. The medians of the top two best-performing results for each problem are in bold.
The overall performance of each algorithm is given by its total rank, calculated as the sum of its ranks over all problems. Based on the results, no single algorithm achieves the best results for all problems; this observation agrees with the NFL theorem. For the Griewank function (F1), DEQPSO produced the best result. In fact, the APSO, SDEAPSO, QPSO, DEQPSO and SDEQPSO algorithms all produced satisfactory results, indicating that the adaptive mechanism and the quantum behaviour of the particles are beneficial for solving this problem. The DEPSO and SDEPSO algorithms produced equally good performance for the Rastrigin function (F2). For the Ackley function (F3), the QPSO-based algorithms, i.e. QPSO, DEQPSO and SDEQPSO, produced the best performance, followed by the adaptive PSO-based algorithms, i.e. APSO and SDEAPSO. As far as the Schwefel function (F4) is concerned, only DEPSO, SDEPSO and SDEAPSO were able to generate satisfactory results, while all the other algorithms showed inferior performance. The total ranking of the algorithms reveals that DEQPSO achieved a better overall performance than the other algorithms. The second-best performing algorithms were DEPSO and SDEAPSO. Most importantly, the results for all problems show that the fully DE-hybridized algorithms, i.e. DEPSO and DEQPSO, required significantly higher computational time to obtain their solutions, while the selectively DE-hybridized algorithms maintained computational requirements reasonably similar to those of the PSO, QPSO and APSO algorithms.

Table 3. Benchmark study results

Algorithm | F1 Med | F1 #R | F1 T(s) | F2 Med | F2 #R | F2 T(s) | F3 Med | F3 #R | F3 T(s) | F4 Med | F4 #R | F4 T(s) | Total Rank
PSO      | 0.658 | 8 | 0.102 | 1.372 | 5 | 0.123 | 0.453 | 8 | 0.104 | 3.617 | 5 | 0.125 | 26
QPSO     | 0.089 | 3 | 0.160 | 1.791 | 6 | 0.150 | 0.005 | 1 | 0.166 | 4.555 | 8 | 0.187 | 18
APSO     | 0.100 | 4 | 0.155 | 1.219 | 4 | 0.162 | 0.041 | 5 | 0.177 | 3.606 | 5 | 0.202 | 18
DEPSO    | 0.634 | 6 | 0.427 | 1.140 | 1 | 0.548 | 0.166 | 6 | 0.419 | 1.781 | 1 | 0.470 | 14
DEQPSO   | 0.064 | 1 | 0.510 | 2.092 | 7 | 0.502 | 0.002 | 1 | 0.490 | 3.023 | 4 | 0.555 | 13
SDEPSO   | 0.629 | 6 | 0.108 | 1.149 | 1 | 0.135 | 0.172 | 6 | 0.177 | 1.891 | 2 | 0.199 | 15
SDEAPSO  | 0.098 | 4 | 0.161 | 1.196 | 3 | 0.157 | 0.035 | 4 | 0.181 | 2.031 | 3 | 0.273 | 14
SDEQPSO  | 0.072 | 2 | 0.177 | 2.125 | 7 | 0.181 | 0.002 | 1 | 0.191 | 3.594 | 5 | 0.271 | 15

4. PROBLEM FORMULATION FOR PATH PLANNING
The AUV path planning problem is formulated in this section. Throughout the formulation, the AUV is assumed to have constant thrust, and hence a constant water reference velocity.

4.1. Path Formulation
In this paper, the primary objective of the AUV path planner is to solve a multimodal non-linear optimization problem, in which the optimal path, among a group of potential paths for the AUV to travel



towards a target location through the ocean environment, is required to be determined. Each potential path of the AUV comprises a series of nodes along the path from the start point to the target (end) point. Controlling and optimizing the coordinates of the path nodes yields the optimized path for the AUV. The start point and the end point of the path are not involved in the optimization because all the potential paths share the same start and end locations. In PSO-based algorithms, each potential path solution for the problem is modelled as an individual particle in the swarm population. The swarm population is denoted by a matrix X = [X1, X2, …, XN]^T, where X is the position vector of the particles and N is the number of particles in the swarm. The entries of the position vectors of the particles represent the coordinates of the path nodes. Assuming every path consists of n+2 nodes including the start point and end point, the number of nodes involved in the optimization is n. In order to record the coordinates of n nodes, the position vector of a particle in a 2D problem space has 2n dimensions, while a particle in 3D has 3n dimensions. Thus, the respective position vectors of the i-th particle at the t-th iteration for the 2D and 3D problems are:

X_i^t = [x_{i,1}, y_{i,1}, x_{i,2}, y_{i,2}, …, x_{i,n}, y_{i,n}]   (17)

X_i^t = [x_{i,1}, y_{i,1}, z_{i,1}, x_{i,2}, y_{i,2}, z_{i,2}, …, x_{i,n}, y_{i,n}, z_{i,n}]   (18)

Based on the path nodes, including the start and end points, B-spline geometry is used to construct the AUV path. B-splines are parametric curves generated from a series of connected piecewise polynomials [36], which are suitable for modelling the AUV path because of their continuity, giving a smooth path, and their locality, allowing path alteration without loss of continuity. The path nodes act as the control points of the B-spline curve according to the following curve function, which gives an output vector P(u) representing a B-spline curve of order k+1 in the form of discretised waypoints. Given that the total number of control points is n+2, the total number of piecewise polynomials is one less than the number of control points, i.e. n+1.

P(u) = Σ_{i=0}^{n+1} x_i B_{i,k}(u)   (19)

where x_i are the control points, u is the non-decreasing knot sequence contained in a knot vector U = [u_0, …, u_i, …, u_{n+k+2}], and B_{i,k}(u) are the piecewise polynomial basis functions of degree k defined by the Cox-de Boor recursion [37] as follows.

B_{i,0}(u) = { 1,  if u_i ≤ u < u_{i+1};  0,  otherwise }   (20)

B_{i,k}(u) = ((u − u_i)/(u_{i+k} − u_i)) B_{i,k−1}(u) + ((u_{i+k+1} − u)/(u_{i+k+1} − u_{i+1})) B_{i+1,k−1}(u)   (21)
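The Cox-de Boor recursion (20)–(21) and the weighted sum (19) translate directly into code. The sketch below is illustrative; the zero-guard on the denominators implements the usual 0/0 := 0 convention for repeated knots, and the cubic clamped-knot example values are assumptions for demonstration.

```python
# Sketch of B-spline path evaluation via the Cox-de Boor
# recursion, equations (19)-(21).

def basis(i, k, u, knots):
    """B_{i,k}(u) from the Cox-de Boor recursion, eqs. (20)-(21)."""
    if k == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    out = 0.0
    d1 = knots[i + k] - knots[i]
    if d1 > 0.0:
        out += (u - knots[i]) / d1 * basis(i, k - 1, u, knots)
    d2 = knots[i + k + 1] - knots[i + 1]
    if d2 > 0.0:
        out += (knots[i + k + 1] - u) / d2 * basis(i + 1, k - 1, u, knots)
    return out

def bspline_point(u, ctrl, k, knots):
    """P(u) as the basis-weighted sum of control points, eq. (19)."""
    D = len(ctrl[0])
    return tuple(sum(basis(i, k, u, knots) * ctrl[i][d]
                     for i in range(len(ctrl))) for d in range(D))
```

With a clamped knot vector the curve interpolates the first control point at u = 0, which is convenient for an AUV path that must begin exactly at the start point.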

The continuity of the spline is fully dependent on the basis functions. Hence, it can be noted from (19) that the control points, i.e. the path nodes, can be adjusted during the path optimization process without affecting the spline continuity.

4.2. Evaluation Functions
When implementing PSO-based algorithms in an optimization problem, it is critical to develop suitable cost evaluation functions to measure the fitness of the particles based on their respective solutions. Due to the high computational efficiency of PSO-based algorithms, the evaluation functions usually contribute the majority of the computational time [11]. The functions are developed based on the optimization criteria of the problem. They must closely resemble the physical conditions of the problem space to provide an accurate cost representation model for finding the optimal solution. For path planning, which is a minimization problem, a lower cost/fitness indicates a better solution. The main criteria for evaluating the AUV path are:
- Minimum length or travel time required to reach the target
- Minimum exposure to threats
- Compliance with the physical motion limitations of the AUV
As the optima of all criteria do not necessarily coincide, a trade-off between these criteria can be established using a weighting scheme with multiple evaluation functions, which include a main evaluation function to measure the path length/time cost, a function to measure the threat cost along the path, and functions to measure the compliance of the path with respect to the AUV motion limitations. Thus, the fitness of a particle/path X_i can be given by a combination of several evaluation functions F_k for different criteria, with each criterion weighted by a cost factor f_k:

f(X_i) = Σ_{k=1}^{K} f_k F_k(X_i)   (22)

where k refers to different evaluation functions and K is the total number of functions for the problem. 4.3. Path Travel Time Cost The main evaluation function for path planning problem is to measure the path cost based on its length or time to travel on the path. This study focuses on finding an optimal path that is capable of taking advantage of favourable current to assist the AUV motion, while avoiding the less favourable current to achieve a shorter travel time. For this purpose, a travel-time-based evaluation function is developed in this study. Based on previous formulation, a given path Xi can be represented as a series of path nodes or alternatively in the form of discretised waypoints P = [pi,1, pi,2, … , pi,m ], where P is the output from B-spline function and m is the total number of discretised waypoints. The travel time cost F1 along a path can be determined by finding the sum of discretised time required to travel on each small path segment that connects the consecutive discretised waypoints in P. ( )

‖⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗ ‖

*

| |

+

(23)

where Vg is the ground reference velocity of the AUV, which is the resultant AUV velocity under the effect of surrounding ocean current. The contribution of current on the AUV can be obtained by projecting the current velocity Vc in the direction of the AUV water reference velocity Va, which is essentially the direction of the path vector. Thus, Vg is given by the sum of Va and the contribution of Vc as shown in (24). ⋅ ⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗

(24)

‖⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗​⃗ ‖
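The travel time cost of (23)–(24) can be sketched as follows: the current is projected onto each segment direction and added to the AUV's water reference speed. The 2D waypoints and the constant current field used in the demonstration are illustrative assumptions; the default water reference speed of 1.5 m/s matches the value used later in the simulations.

```python
# Sketch of the travel time cost, equations (23)-(24).
import math

def travel_time(waypoints, current_at, v_a=1.5):
    """Sum of segment lengths divided by the ground speed on each segment."""
    total = 0.0
    for (x1, y1), (x2, y2) in zip(waypoints, waypoints[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg = math.hypot(dx, dy)
        if seg == 0.0:
            continue
        cx, cy = current_at(x1, y1)
        v_g = v_a + (cx * dx + cy * dy) / seg   # eq. (24): projected current
        total += seg / v_g                      # eq. (23)
    return total
```

For example, a 10 m straight run along +x with a uniform 0.5 m/s current in the same direction gives a ground speed of 2.0 m/s and a travel time of 5 s, while the same run against the current takes 10 s, which is exactly the favourable-current effect the cost function is designed to capture.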

4.4. Threat Cost
The obstacle avoidance ability of the path planner relies on the threat cost evaluation function, in which the exposure of the path to threats/obstacles is measured. All threats in the problem space are modelled as ellipses (or circles if the major and minor axes are equal) under the 2D condition, and as ellipsoids (or spheres if all the principal axes are equal) under the 3D condition, with their principal axes aligned with the coordinate axes. A threat cost evaluation method based on the intersection between the path and the threats is employed in this study. Assuming a threat h in the 3D problem space with centre O_{c,h} = (O_cx, O_cy, O_cz) and semi-principal axes O_{r,h} = (O_rx, O_ry, O_rz), its parametric equation can be expressed as (25). The parametric equation of a path segment connecting two consecutive waypoints p_{i,j} = (x1, y1, z1) and p_{i,j+1} = (x2, y2, z2) can be written as (26). The cost evaluation in 2D takes a similar approach, except that the dimension reduction in 2D reduces the number of variables and hence simplifies the computation.

((x − O_cx)/O_rx)² + ((y − O_cy)/O_ry)² + ((z − O_cz)/O_rz)² = 1   (25)

(x, y, z) = (x1 + s(x2 − x1), y1 + s(y2 − y1), z1 + s(z2 − z1)),  0 ≤ s ≤ 1   (26)

Substituting (26) into (25) yields the quadratic equation (27) in s, with coefficients given by (28)–(30). The intersection of the path with the threat can be evaluated by computing the discriminant ξ of (27) according to (31).

a s² + b s + c = 0   (27)

a = ((x2 − x1)/O_rx)² + ((y2 − y1)/O_ry)² + ((z2 − z1)/O_rz)²   (28)

b = 2[(x1 − O_cx)(x2 − x1)/O_rx² + (y1 − O_cy)(y2 − y1)/O_ry² + (z1 − O_cz)(z2 − z1)/O_rz²]   (29)

c = ((x1 − O_cx)/O_rx)² + ((y1 − O_cy)/O_ry)² + ((z1 − O_cz)/O_rz)² − 1   (30)

ξ = b² − 4ac   (31)

A safety margin is added to the principal axes of all threat regions so that the AUV will not conflict with a threat when ξ = 0, i.e. when the path is tangent to the threat region. When ξ > 0, the path conflicts with the threat if the roots s1 and s2 given by (32) are within the range 0 ≤ s1, s2 ≤ 1.

s_{1,2} = (−b ± √ξ) / (2a)   (32)

If the path conflicts with a threat, the threat cost is proportional to the length of the path segment contained in the threat region, as given by (33). The intersection points S1 and S2 can be determined by solving (27) for s1 and s2, and substituting them back into (26).

F_2(X_i) = Σ_h Σ_j ‖S1 S2‖   (33)
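The segment-ellipsoid intersection test of (25)–(33) can be sketched for a single segment and a single threat as below. This is an illustrative sketch: the clipping of the roots to the segment range and the unit-sphere demo values are assumptions, while the coefficient formulas follow the substitution of (26) into (25).

```python
# Sketch of the segment-ellipsoid intersection test, eqs. (25)-(33).
# Returns the length of the segment inside the threat (zero when
# there is no conflict).
import math

def threat_overlap(p1, p2, centre, radii):
    """Length of segment p1-p2 contained in the ellipsoidal threat."""
    d = [q - p for p, q in zip(p1, p2)]          # segment direction
    e = [p - c for p, c in zip(p1, centre)]      # offset from threat centre
    a = sum((di / r) ** 2 for di, r in zip(d, radii))                  # (28)
    b = 2.0 * sum(ei * di / r ** 2 for ei, di, r in zip(e, d, radii))  # (29)
    c = sum((ei / r) ** 2 for ei, r in zip(e, radii)) - 1.0            # (30)
    xi = b * b - 4.0 * a * c                                           # (31)
    if xi <= 0.0 or a == 0.0:
        return 0.0                     # no crossing (or merely tangent)
    s1 = (-b - math.sqrt(xi)) / (2.0 * a)                              # (32)
    s2 = (-b + math.sqrt(xi)) / (2.0 * a)
    s1, s2 = max(s1, 0.0), min(s2, 1.0)   # clip to the segment 0 <= s <= 1
    if s2 <= s1:
        return 0.0
    seg_len = math.sqrt(sum(di * di for di in d))
    return (s2 - s1) * seg_len                                         # (33)
```

For a segment from (-2, 0, 0) to (2, 0, 0) passing through a unit sphere at the origin, the roots are s = 0.25 and s = 0.75, so the contained length is 2.0, i.e. the sphere's diameter, as expected.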

4.5. Physical Motion Limitations The considerations for physical motion limitations of AUV should include its yaw (turning) and pitch motions at a given forward speed. Evaluation functions are developed to check the compliance of the path with respect to these limitations, and to penalise the cost if any of the limitations is violated. To check the path compliance with the yaw limitation, the turning angle of the path in the x-y plane is measured and compared against the maximum allowable turning angle ψmax. Considering two consecutive path segments that consist of three waypoints pi, j , pi, j+1 and pi, j+2 (refer to Figure 1), the turning angle ψ can be obtained from the cosine function as shown in (34). The first part of the function is the scalar projection of the second path segment on the first segment in the x-y plane, while the second part is the length of the second path segment in the x-y plane.

$$\psi = \cos^{-1}\left[\frac{\overrightarrow{p_{i,j}\,p_{i,j+1}} \cdot \overrightarrow{p_{i,j+1}\,p_{i,j+2}}}{\left\|\overrightarrow{p_{i,j}\,p_{i,j+1}}\right\|\,\left\|\overrightarrow{p_{i,j+1}\,p_{i,j+2}}\right\|}\right] \qquad (34)$$

with all vectors evaluated in the x-y plane.

Figure 1. Yaw angle and pitch angle of a path

Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 94 – 112
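The turning-angle computation in (34) can be sketched as follows (the function name is hypothetical; z is dropped because the yaw check works in the x-y plane):

```python
import numpy as np

def turning_angle_xy(p0, p1, p2):
    """Turning angle psi between two consecutive path segments,
    projected into the x-y plane as in (34). Points are (x, y, z)."""
    a = np.asarray(p1, float)[:2] - np.asarray(p0, float)[:2]
    b = np.asarray(p2, float)[:2] - np.asarray(p1, float)[:2]
    cos_psi = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # clip guards against round-off pushing cos_psi slightly outside [-1, 1]
    return np.arccos(np.clip(cos_psi, -1.0, 1.0))
```

A right-angle dogleg, for example, returns π/2, and a straight continuation returns 0.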



The cost F3 for violating the yaw limitation is obtained from the calculated turning angle as shown in (35).

$$F_3 = \sum_{i}\sum_{j} c(\psi_{i,j}), \qquad c(\psi) = \begin{cases} 0, & |\psi| \le \psi_{max} \\ \left(|\psi| - \psi_{max}\right)/\left(\pi - \psi_{max}\right), & |\psi| > \psi_{max} \end{cases} \qquad (35)$$

For the pitch motion, the instantaneous pitch angle θ and the change in pitch Δθ of the AUV at any point should not exceed their respective maximum values (θmax and Δθmax). Referring to Figure 1, θ can be determined using the basic tangent function as shown in (36), and Δθ can then be calculated using (37).

$$\theta_{i,j} = \tan^{-1}\left[\frac{z_{i,j+1} - z_{i,j}}{\left\|\overrightarrow{p_{i,j}\,p_{i,j+1}}\right\|_{xy}}\right] \qquad (36)$$

$$\Delta\theta_{i,j} = \theta_{i,j+1} - \theta_{i,j} \qquad (37)$$

From the calculated pitch, the cost F4 for violating θmax and the cost F5 for violating Δθmax can be obtained as shown in (38) and (39) respectively.

$$F_4 = \sum_{i}\sum_{j} c(\theta_{i,j}), \qquad c(\theta) = \begin{cases} 0, & |\theta| \le \theta_{max} \\ \left(|\theta| - \theta_{max}\right)/\left(\pi/2 - \theta_{max}\right), & |\theta| > \theta_{max} \end{cases} \qquad (38)$$

$$F_5 = \sum_{i}\sum_{j} c(\Delta\theta_{i,j}), \qquad c(\Delta\theta) = \begin{cases} 0, & |\Delta\theta| \le \Delta\theta_{max} \\ \left(|\Delta\theta| - \Delta\theta_{max}\right)/\left(\pi - \Delta\theta_{max}\right), & |\Delta\theta| > \Delta\theta_{max} \end{cases} \qquad (39)$$
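The piecewise limit costs F3-F5 share one shape: zero inside the limit and a growing penalty outside it. A minimal sketch, assuming a linear normalisation of the violation (the paper's exact normalising constants are not stated, so `worst` is an assumption):

```python
import numpy as np

def limit_penalty(angle, limit, worst):
    """Normalised penalty for exceeding a motion limit: zero inside the
    limit, growing linearly to 1 at the assumed worst-case angle.
    This is the general shape of the piecewise costs in (35), (38), (39)."""
    a = abs(angle)
    if a <= limit:
        return 0.0
    return (a - limit) / (worst - limit)

def yaw_cost(turning_angles, psi_max=np.radians(30)):
    """F3-style cost: summed penalties over all turning angles of a path."""
    return sum(limit_penalty(psi, psi_max, np.pi) for psi in turning_angles)
```

The same helper can be reused for F4 and F5 with θmax and Δθmax as the limits.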

5. SIMULATIONS

This section evaluates the performance of the proposed algorithms on the AUV path planning problem under different scenarios.

5.1. Simulation Setup

The path planning of the AUV was conducted as a 1000-run Monte Carlo simulation, first under a 2D scenario and then under a 3D scenario. The machine used has an Intel Core i5-6300U CPU @ 2.4 GHz with 8 GB RAM. The problem spaces of the simulations were assumed to be a current field consisting of 500×500 square grids for 2D, and 500×500×500 cube grids for 3D, with each side of a grid equivalent to 1 metre. Non-uniform ocean current and static obstacles of different sizes are present in the problem space. The current field was generated based on data obtained from the field experiment conducted at Beauty Point, Tasmania, Australia. The AUV is required to travel from a starting point to a target with a pre-set water reference velocity of 1.5 m/s. Based on the properties of the REMUS 100 AUV, the safety margin used in the threat computation is set to 1 metre, while the angles ψmax, θmax and Δθmax are set to 30°, 45° and 10° respectively. The cost factor for the path travel time f1 was set to 1.0, and the other cost factors f2–f5 were all set to 0.25, so that all costs except the travel time cost have a similar impact on the solutions. Hence, when a path solution violates neither the threat exposure (f2) nor the physical motion limitations (f3–f5), the fitness value of the solution directly represents the time required for the AUV to travel the corresponding path. In each simulation run, the maximum number of iterations for the algorithm was set to 100 with a pre-defined stopping threshold: the algorithm iterates up to 100 times, but stops whenever the solution difference between iterations falls below the pre-set threshold.
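A minimal sketch of this stopping policy; the `step` callback, which runs one optimiser iteration and returns the best fitness found so far, is hypothetical:

```python
def optimise(step, init_fitness, max_iter=100, tol=1e-3):
    """Iterate an optimiser up to max_iter times, stopping early when the
    between-iteration improvement falls below tol, as in the simulation
    setup. Returns (best fitness, iterations used)."""
    best = init_fitness
    for it in range(1, max_iter + 1):
        new_best = step()
        if abs(best - new_best) < tol:   # solution difference below threshold
            return new_best, it
        best = new_best
    return best, max_iter
```

The threshold value `tol` is illustrative; the paper does not state its pre-set value.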



The population size of all algorithms was set to 150 particles, with each particle consisting of 5 path nodes, meaning each particle has 10 dimensions in the 2D problem and 15 dimensions in the 3D problem. All algorithm parameters were set to their respective suggested values as discussed in Section 2. For comparison purposes, another path planning technique, RRT*, as well as other metaheuristic algorithms, including Ant Colony Optimization (ACO) [38], Firefly Algorithm (FA) [16], Differential Evolution (DE) and Genetic Algorithm (GA) [9], were also tested in this study.

5.2. Simulation Results

The performances of the algorithms are compared based on the following properties: solution quality, stability, convergence behaviour and computational requirements. These properties can be evaluated by studying the fitness values of the solutions obtained and the computational time required to obtain them. The fitness value of a solution is simply the time (cost) required for the AUV to reach the endpoint from the starting point by travelling on the path corresponding to the solution; a lower fitness value therefore indicates a higher solution quality and hence a stronger search ability. The Monte Carlo simulation results of the 2D and 3D scenarios are compared in the boxplots of Figure 2 and Figure 3. The data were not normally distributed according to the Shapiro-Wilk normality test. In the boxplots, the medians of the data are represented by the red horizontal lines; the blue boxes indicate the 25th to 75th percentile range; the black whiskers indicate the acceptable data range. For the boxplots of fitness values, the extreme lowest end of each whisker gives the individual best fitness obtained by each algorithm over the 1000-run simulation, and the green cross represents the best known (lowest) fitness value among all algorithms in the simulations. The acceptable data ranges and percentile ranges are indicators of the stability of the algorithms' performances, while the medians give information about the solution qualities and search abilities of the algorithms.

Figure 2. Boxplot of fitness values in 2D scenario

Figure 3. Boxplot of fitness values in 3D scenario




The Kruskal-Wallis ANOVA procedure with a significance level of 0.05 was used to rank the solution qualities (fitness values), with pairwise comparisons based on the Holm-Bonferroni step-down method. Algorithms are given the same rank if they are not statistically different from one another. Detailed results of the path planning simulation, including the median of the fitness obtained (Med.), the best known fitness (Best), the interquartile range (IQR), the ANOVA rank (#R), the median of the computational time (T) and the total ranks, are tabulated in Table 4. The total ranks are calculated as the sum of the ranks for the 2D and 3D scenarios; the ranking does not consider the impact of computational time. Based on Figure 2, Figure 3 and Table 4, almost all the PSO-based algorithms have better solution quality than RRT* and the other metaheuristic algorithms, with the exception of standard PSO being outperformed by FA. Despite having lower solution quality, RRT* has the shortest computational time in both 2D and 3D scenarios. It can also be seen that all variants of PSO and QPSO produced better solution qualities than the standard PSO and QPSO. DEPSO and DEQPSO outperformed all other algorithms by achieving the lowest median fitness values in both 2D and 3D. The total ranks of DEPSO and DEQPSO suggest that the two fully DE-hybridized algorithms produce the top two solution qualities for the path planning problem. However, the computational times of DEPSO and DEQPSO are significantly higher than those of all the other algorithms due to the high computational requirements of the greedy selection operator.
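The Holm-Bonferroni step-down rule used for the pairwise ranking can be sketched as follows. The omnibus Kruskal-Wallis test itself is available as `scipy.stats.kruskal`; this self-contained helper only illustrates the step-down correction applied to the pairwise p-values:

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm-Bonferroni step-down: sort the m p-values ascending and
    reject hypothesis k (0-indexed in sorted order) while
    p_(k) <= alpha / (m - k); once one comparison fails, all remaining
    comparisons are accepted (the algorithms tie in rank).
    Returns a reject-decision per input p-value, in input order."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break  # step-down stops at the first non-significant test
    return reject
```

Rejected pairs are declared statistically different and ranked apart; accepted pairs share a rank, matching the "same rank if not statistically different" rule above.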

Table 4. Path planning simulation results (Med. and Best in ×10²)

Algorithm | 2D Med. | 2D Best | 2D IQR | 2D #R | 2D T(s) | 3D Med. | 3D Best | 3D IQR | 3D #R | 3D T(s) | Total Rank
RRT*      |  3.25   |  3.14   |  9.4   |  11   |   4.8   |  3.48   |  3.37   |  10.9  |  11   |  14.3   |  22
ACO       |  3.24   |  3.12   |  8.0   |  13   |   9.4   |  3.46   |  3.29   |  17.6  |  11   |  41.3   |  24
FA        |  3.11   |  3.02   |  6.2   |   8   |   9.2   |  3.28   |  3.21   |   7.7  |   7   |  41.2   |  15
GA        |  3.13   |  2.98   |  6.3   |  10   |  12.3   |  3.33   |  3.23   |  11.7  |   9   |  48.3   |  19
DE        |  3.21   |  3.05   |  6.7   |  11   |  12.8   |  3.41   |  3.34   |  15.5  |  11   |  53.6   |  22
PSO       |  3.10   |  3.00   |  5.4   |   8   |  10.7   |  3.35   |  3.21   |  12.1  |   9   |  34.6   |  17
QPSO      |  3.09   |  3.00   |  6.4   |   7   |   9.9   |  3.27   |  3.19   |  13.2  |   7   |  30.9   |  14
APSO      |  3.01   |  2.92   |  1.3   |   5   |  10.8   |  3.20   |  3.17   |   2.6  |   5   |  37.7   |  10
DEPSO     |  2.90   |  2.85   |  5.9   |   1   |  22.4   |  3.09   |  3.04   |   5.9  |   1   |  69.0   |   2
DEQPSO    |  2.89   |  2.85   |  3.7   |   1   |  20.8   |  3.07   |  3.03   |   3.7  |   1   |  76.7   |   2
SDEPSO    |  2.98   |  2.91   |  7.7   |   6   |  12.8   |  3.18   |  3.14   |   8.8  |   5   |  35.7   |  11
SDEAPSO   |  2.99   |  2.92   |  6.2   |   3   |  14.9   |  3.14   |  3.10   |   3.7  |   4   |  38.8   |   7
SDEQPSO   |  2.94   |  2.90   |  7.8   |   3   |  13.7   |  3.13   |  3.10   |   4.7  |   3   |  37.6   |   6

The solution qualities of SDEAPSO and SDEQPSO are second only to those of the fully DE-hybridized algorithms; the two are ranked equally in 2D based on the ANOVA ranking. APSO has better solution quality than SDEPSO in 2D. It is worth noting that APSO has the lowest interquartile range in both 2D and 3D, indicating the highest stability among all the algorithms. In the 3D scenario, SDEQPSO is ranked slightly higher than SDEAPSO, while SDEPSO is ranked similarly to APSO. The total ranks of the overall performance in both 2D and 3D place SDEQPSO and SDEAPSO third and fourth respectively. More importantly, the computational times of the two selectively DE-hybridized algorithms are significantly lower than those of the fully DE-hybridized algorithms and very close to those of the other PSO-based algorithms. This indicates the higher computational efficiency of SDEQPSO and SDEAPSO in solving the path planning problem: they produce solution quality very close to that of DEPSO and DEQPSO at a significantly lower computational cost. Relative to the problem size, the computational time required by the path planner is short, particularly in comparison to the time required for estimating the ocean environment based on the AUV sensory measurements.

6. VEHICLE PATH VALIDATION

For validation purposes, the path solutions generated by the AUV path planner were used as a reference trajectory for a dynamic model of the REMUS 100. This section briefly explains the dynamic model and the path following controller used.

6.1. Dynamic Model

Based on Fossen's vectorial representation [39] and the SNAME (Society of Naval Architects and Marine Engineers) standard formulation, the 6 DOF equations of motion for a typical AUV can be modelled as shown in (40) and (41).


$$\dot{\boldsymbol{\eta}} = \begin{bmatrix} \dot{\boldsymbol{\eta}}_1 \\ \dot{\boldsymbol{\eta}}_2 \end{bmatrix} = \begin{bmatrix} \mathbf{R}(\boldsymbol{\eta}_2) & \mathbf{0} \\ \mathbf{0} & \mathbf{T}(\boldsymbol{\eta}_2) \end{bmatrix} \boldsymbol{\nu} \qquad (40)$$

$$\mathbf{M}\dot{\boldsymbol{\nu}} + \mathbf{C}(\boldsymbol{\nu})\boldsymbol{\nu} + \mathbf{D}(\boldsymbol{\nu})\boldsymbol{\nu} + \mathbf{g}(\boldsymbol{\eta}) = \boldsymbol{\tau} \qquad (41)$$

where R(η2) and T(η2) are the transformation matrices between the inertial and body-fixed reference frames for the translational and angular velocities respectively. η includes the position η1 and the orientation η2 of the vehicle with respect to the inertial reference frame, while the derivative of η in (40) represents its rate of change. ν includes the translational and rotational velocities of the vehicle with respect to the body-fixed reference frame, as described in (42).

$$\boldsymbol{\eta}_1 = [x, y, z]^T, \qquad \boldsymbol{\eta}_2 = [\phi, \theta, \psi]^T, \qquad \boldsymbol{\nu} = [u, v, w, p, q, r]^T \qquad (42)$$
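The kinematic transformation in (40) can be sketched numerically as follows, assuming ZYX Euler angles as in Fossen's formulation; the function name is illustrative:

```python
import numpy as np

def J(eta2):
    """6x6 transformation of (40): eta_dot = J(eta2) @ nu, with R(eta2)
    acting on the linear and T(eta2) on the angular body velocities,
    for Euler angles eta2 = (phi, theta, psi)."""
    phi, th, psi = eta2
    cph, sph = np.cos(phi), np.sin(phi)
    cth, sth = np.cos(th), np.sin(th)
    cps, sps = np.cos(psi), np.sin(psi)
    R = np.array([  # body-to-inertial rotation, Rz(psi) Ry(theta) Rx(phi)
        [cps * cth, -sps * cph + cps * sth * sph,  sps * sph + cps * cph * sth],
        [sps * cth,  cps * cph + sph * sth * sps, -cps * sph + sth * sps * cph],
        [-sth,       cth * sph,                    cth * cph]])
    T = np.array([  # Euler-angle rate transformation (singular at theta = 90 deg)
        [1.0, sph * np.tan(th), cph * np.tan(th)],
        [0.0, cph,              -sph],
        [0.0, sph / cth,        cph / cth]])
    Z = np.zeros((3, 3))
    return np.block([[R, Z], [Z, T]])

# inertial-frame rates for a vehicle surging at 1.5 m/s with zero attitude:
eta_dot = J(np.zeros(3)) @ np.array([1.5, 0, 0, 0, 0, 0])
```

At zero attitude J reduces to the identity, so the body surge maps directly to the inertial x-rate.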

In (41), M and C(ν) describe the inertia and Coriolis matrices (including rigid-body and added-mass terms) respectively, D(ν) is the hydrodynamic damping matrix, g(η) contains the hydrostatic restoring forces, and τ describes the control forces from the actuators. This study uses the REMUS 100 model derived from (40)-(42) by [40], together with the hydrodynamic coefficients calculated in [40].

6.2. Path Following Controller

The path following controller of the AUV model uses the integral line-of-sight (iLOS) guidance law to set the yaw and pitch angles for following the trajectory generated by the path planner. The iLOS guidance law described by [41] allows the AUV to shape the convergence towards the path in the presence of ocean current and environmental disturbance. The desired iLOS yaw angle (heading) ψd and pitch angle θd are given by (43) and (44), where ψp and θp are the azimuth and elevation angles of the path:

$$\psi_d = \psi_p - \tan^{-1}\left(\frac{e + K_{i,y}\,e_{int}}{\Delta_y}\right), \qquad \dot{e}_{int} = \frac{\Delta_y\,e}{\left(e + K_{i,y}\,e_{int}\right)^2 + \Delta_y^2} \qquad (43)$$

$$\theta_d = \theta_p + \tan^{-1}\left(\frac{h + K_{i,z}\,h_{int}}{\Delta_z}\right), \qquad \dot{h}_{int} = \frac{\Delta_z\,h}{\left(h + K_{i,z}\,h_{int}\right)^2 + \Delta_z^2} \qquad (44)$$

where e is the cross-track error, h is the vertical-track error, Ki,y and Ki,z are the integral gains, and Δy and Δz represent the look-ahead distances for the iLOS heading and pitch respectively. The integral terms of the cross-track error e_int and vertical-track error h_int produce non-zero ψd and θd even when the AUV is on the planned path, allowing the vehicle to counteract any effects of ocean current with the necessary side-slip and pitch angles. The rates of the integral terms, ė_int and ḣ_int, reduce the integral action when the cross-track and vertical-track errors are large (i.e. the vehicle is far from the planned path), in order to minimize the risk of integrator wind-up.

6.3. Validation Results

The feasibility of the path solutions is first checked against the motion limitation of the REMUS 100, which has a minimum turning radius of 8.1 metres in the worst-case scenario [42]: the curvature radius of a feasible path must be larger than the minimum turning radius. The paths generated by SDEQPSO satisfy the AUV motion limitation, as shown in Figure 4. Next, the 2D and 3D solutions generated by SDEQPSO were validated by comparison against the simulated paths in Figure 5. The AUV is required to travel from the starting point (green square) to the target (pink star) without running into obstacles, while trying to take advantage of favourable current to assist its motion. In the 2D results, the blue-coloured regions indicate favourable current while the red-coloured regions denote less favourable current. In both results, the solid sections of the planned paths indicate that the ocean current has a positive effect on the AUV motion, while the dashed sections suggest otherwise. It can be observed that the paths follow the favourable current and avoid the less favourable current to achieve a shorter travel time. The simulated paths closely resemble the planned paths in both scenarios.
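One step of the iLOS heading law (43) might be sketched as follows; the sign convention and the explicit Euler integration of the integral state are assumptions to be checked against the vehicle's frame convention:

```python
import numpy as np

def ilos_heading(e, e_int, Ki_y, Delta_y, dt):
    """One update of the iLOS heading law (43): returns the desired
    course correction and the integrated cross-track state. A positive
    cross-track error e steers the vehicle back towards the path."""
    psi_d = -np.arctan2(e + Ki_y * e_int, Delta_y)
    # integral rate shrinks when the vehicle is far off track (anti-windup)
    e_int_dot = Delta_y * e / ((e + Ki_y * e_int) ** 2 + Delta_y ** 2)
    return psi_d, e_int + dt * e_int_dot
```

The pitch channel (44) is structurally identical with h, Ki,z and Δz in place of e, Ki,y and Δy.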


Figure 4. Curvature radius of planned path for (a) 2D and (b) 3D
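The feasibility check behind Figure 4 can be sketched with the three-point circumradius formula; the helper is illustrative, not the authors' implementation:

```python
import numpy as np

def curvature_radius(p0, p1, p2):
    """Circumradius of three consecutive waypoints, R = abc / (4 * Area).
    A feasible path must keep R above the REMUS 100 worst-case minimum
    turning radius of 8.1 m; collinear points return inf."""
    p0, p1, p2 = (np.asarray(p, float) for p in (p0, p1, p2))
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p2 - p0)
    u = np.zeros(3); u[:p0.size] = p1 - p0   # pad 2D waypoints with z = 0
    v = np.zeros(3); v[:p0.size] = p2 - p0
    area = 0.5 * np.linalg.norm(np.cross(u, v))
    return np.inf if area == 0.0 else a * b * c / (4.0 * area)
```

Sliding this check along every triple of consecutive sampled path points gives the curvature profile plotted in Figure 4.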

Figure 5. Validation of path solution in (a) 2D scenario and (b) 3D scenario

The cross-track errors of the simulated paths relative to the planned paths for the 2D and 3D scenarios are graphed in Figure 6. The errors in both scenarios are well below 1 metre, showing that the AUV was able to follow the planned paths closely. Hence, the simulation results show that the path solutions generated by the proposed algorithm are smooth and feasible for the path planning application.

Figure 6. Cross-track error of simulated path relative to planned path

7. CONCLUSION

By selectively hybridizing PSO with differential evolution, this paper presented new variants of the PSO algorithm with improved search ability for the global minimum path of an AUV, without increasing the computational requirements. The proposed algorithms were benchmarked against other algorithms in an offline AUV path planner, which represents the minimum capability required of a path planner; if the proposed algorithms provide better computational efficiency in this setting, they can be expected to outperform the tested algorithms in an online path planner as well. Based on the Monte Carlo simulations and ANOVA procedures, the SDEAPSO and SDEQPSO algorithms achieved performance similar to the DEPSO and DEQPSO algorithms in terms of solution quality and stability, while having a significantly lower computational requirement. Most importantly, the simulation results showed that the planned paths in both the 2D and 3D scenarios are smooth, feasible and able to account for an a priori known environment. The PSO-based algorithms proposed in this study are efficient for solving nondeterministic polynomial time (NP)-hard problems, such as the path planning problem. Although the simulation assumed an a priori known environment to represent the minimum capability of a path planner, the algorithms can be adapted to more realistic operational conditions in future work: their demonstrated computational efficiency makes them suitable for compute-intensive problems such as path re-planning in highly dynamic environments. A future extension of this work will be the development of a path re-planning algorithm that can deal with an a priori unknown environment.

ACKNOWLEDGEMENTS

The present work is supported by the Tasmania Graduate Research Scholarship provided by the Australian Maritime College (AMC). The authors acknowledge the AUV team at AMC for providing the data from their field experiment at Beauty Point, Tasmania, Australia.

REFERENCES
[1] T. Xue, et al., "Trajectory planning for autonomous mobile robot using a hybrid improved QPSO algorithm," Soft Computing, vol. 21, no. 9, pp. 2421-2437, 2017.
[2] Z. Zeng, et al., "A comparison of optimization techniques for AUV path planning in environments with ocean currents," Robotics and Autonomous Systems, vol. 82, pp. 61-72, 2016.
[3] D. Youakim and P. Ridao, "Motion planning survey for autonomous mobile manipulators underwater manipulator case study," Robotics and Autonomous Systems, vol. 107, pp. 20-44, 2018.
[4] D. Kruger, et al., "Optimal AUV path planning for extended missions in complex, fast-flowing estuarine environments," in Proceedings 2007 IEEE International Conference on Robotics and Automation, 2007.
[5] D. Ferguson and A. Stentz, "Using interpolation to improve path planning: The Field D* algorithm," Journal of Field Robotics, vol. 23, no. 2, pp. 79-101, 2006.
[6] C. Petres, et al., "Path planning for autonomous underwater vehicles," IEEE Transactions on Robotics, vol. 23, no. 2, pp. 331-341, 2007.
[7] D. Rao and S. B. Williams, "Large-scale path planning for underwater gliders in ocean currents," in Australasian Conference on Robotics and Automation (ACRA), 2009.
[8] J. D. Hernández, et al., "Online motion planning for unexplored underwater environments using autonomous underwater vehicles," Journal of Field Robotics, vol. 36, no. 2, pp. 370-396, 2019.
[9] A. Alvarez, A. Caiti, and R. Onken, "Evolutionary path planning for autonomous underwater vehicles in a variable ocean," IEEE Journal of Oceanic Engineering, vol. 29, no. 2, pp. 418-429, 2004.
[10] J. Witt and M. Dunbabin, "Go with the flow: Optimal AUV path planning in coastal environments," in Australian Conference on Robotics and Automation, 2008.
[11] J. Sun, C. H. Lai, and X. J. Wu, Particle Swarm Optimisation: Classical and Quantum Perspectives. Boca Raton, FL: CRC Press, 2012.
[12] Y. Qin, et al., "Path planning for mobile robot based on particle swarm optimization," Robot, vol. 26, no. 3, pp. 222-225, 2004.
[13] Y. Fu, et al., "Path planning for UAV based on quantum-behaved particle swarm optimization," in Proceedings of SPIE - The International Society for Optical Engineering, 2009.
[14] Z. B. Shi, et al., "Path planning for mobile robot based on quantum-behaved particle swarm optimization," Journal of Harbin Institute of Technology, vol. 42, suppl. 2, pp. 33-37, 2010.
[15] Z. Zeng, et al., "Efficient path re-planning for AUVs operating in spatiotemporal currents," Journal of Intelligent & Robotic Systems, vol. 79, no. 1, pp. 135-153, 2015.
[16] S. MahmoudZadeh, et al., "Online path planning for AUV rendezvous in dynamic cluttered undersea environment using evolutionary algorithms," Applied Soft Computing, vol. 70, pp. 929-945, 2018.
[17] H. S. Lim, et al., "Performance evaluation of particle swarm intelligence based optimization techniques in a novel AUV path planner," in 2018 IEEE OES Autonomous Underwater Vehicle Symposium, 2018.
[18] Y. Fu, et al., "Route planning for unmanned aerial vehicle (UAV) on the sea using hybrid differential evolution and quantum-behaved particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 43, 2013.
[19] Z. H. Zhan, et al., "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), vol. 39, no. 6, pp. 1362-1381, 2009.
[20] R. Eberhart and J. Kennedy, "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, 1995.


[21] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), 1999.
[22] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in 1998 IEEE International Conference on Evolutionary Computation Proceedings, IEEE World Congress on Computational Intelligence, 1998.
[23] Y. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," in International Conference on Evolutionary Programming, 1998.
[24] Y. Shi and R. C. Eberhart, "Comparing inertia weights and constriction factors in particle swarm optimization," in Proceedings of the 2000 Congress on Evolutionary Computation, 2000.
[25] Y. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, 2001.
[26] J. Sun, B. Feng, and W. Xu, "Particle swarm optimization with particles having quantum behavior," in Proceedings of the 2004 Congress on Evolutionary Computation, 2004.
[27] W. J. Zhang and X. F. Xie, "DEPSO: hybrid particle swarm with differential evolution operator," in 2003 IEEE International Conference on Systems, Man and Cybernetics, 2003.
[28] B. Raphael and I. F. Smith, Fundamentals of Computer-Aided Engineering. John Wiley & Sons, 2003.
[29] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.
[30] M. Locatelli, "A note on the Griewank test function," Journal of Global Optimization, vol. 25, no. 2, pp. 169-174, 2003.
[31] H. Mühlenbein, M. Schomisch, and J. Born, "The parallel genetic algorithm as function optimizer," Parallel Computing, vol. 17, no. 6-7, pp. 619-632, 1991.
[32] D. H. Ackley, "The model," in A Connectionist Machine for Genetic Hillclimbing. Boston, MA: Springer, 1987, pp. 29-70.
[33] T. Bäck and H.-P. Schwefel, "An overview of evolutionary algorithms for parameter optimization," Evolutionary Computation, vol. 1, no. 1, pp. 1-23, 1993.
[34] P. E. McKight and J. Najab, "Kruskal-Wallis test," in The Corsini Encyclopedia of Psychology, 2010.
[35] Y. Hochberg and A. C. Tamhane, Multiple Comparison Procedures. John Wiley & Sons, 1987.
[36] L. Piegl and W. Tiller, The NURBS Book. Springer Science & Business Media, 2012.
[37] C. de Boor, A Practical Guide to Splines. New York: Springer-Verlag, 1978.
[38] S. Mirjalili, J. S. Dong, and A. Lewis, "Ant colony optimizer: theory, literature review, and application in AUV path planning," Studies in Computational Intelligence, pp. 7-21, 2020.
[39] T. I. Fossen, Guidance and Control of Ocean Vehicles. Norway: John Wiley & Sons, 1999.
[40] T. T. J. Prestero, "Verification of a six-degree of freedom simulation model for the REMUS autonomous underwater vehicle," Massachusetts Institute of Technology, 2001.
[41] W. Caharija, et al., "Path following of underactuated autonomous underwater vehicles in the presence of ocean currents," in 2012 IEEE 51st Conference on Decision and Control (CDC), 2012.
[42] Y. H. Eng, et al., "Online system identification of an autonomous underwater vehicle via in-field experiments," IEEE Journal of Oceanic Engineering, vol. 41, no. 1, 2015.

BIOGRAPHIES OF AUTHORS

Hui Sheng Lim received the Bachelor's degree in Marine and Offshore Engineering in 2016 from the University of Tasmania, Australia, where he is currently working toward the Ph.D. degree. His current research interests include optimal guidance, navigation and control systems for autonomous underwater vehicles (AUVs) using swarm intelligence.

Shuangshuang Fan received a B.E. in Mechanical Engineering from Shandong University, Jinan, China, in 2008, and a PhD in Mechatronic Engineering from Zhejiang University, Hangzhou, China, in 2013. From 2013 to 2014, she was a Research Engineer with the Institute of Shanghai Aerospace Control Technology. She was with the Acoustic Signal Processing Lab, Zhejiang University, as a Postdoctoral Researcher from 2014 to 2017. She was a lecturer at Australian Maritime College, University of Tasmania. Her research interests include the navigation, control and path planning of underwater vehicles in dynamic environments.

Christopher Chin commenced his academic career as a lecturer at RMIT University whilst undertaking his doctoral studies in Mathematics in 2003. He is currently a mathematician and a senior lecturer at the University of Tasmania, Australia, within the National Centre for Maritime Engineering and Hydrodynamics at the Australian Maritime College. He works closely with maritime engineers, focusing on maritime education and alternative energies, and is currently investigating the potential use of alternative energies in the maritime offshore sector in order to address the depletion of fossil fuels.

Shuhong Chai completed her Master's degree in Naval Architecture at the Dalian University of Technology in China, and in 2006 obtained her doctoral degree from the Universities of Strathclyde and Glasgow. She was then a senior hydrodynamics consultant and project manager with Oceanic Consulting Corporation in Canada. In 2008, Dr Chai joined AMC as a Senior Lecturer. She has since undertaken senior administrative roles, including Director of the National Centre for Maritime Engineering and Hydrodynamics, Associate Dean (Learning and Teaching) for the AMC, Associate Dean (Global) of the College of Sciences and Engineering, and AMC Principal. She is very active in the subsea and underwater technology field. She was recently appointed as a member of the Committee on Loads of the International Ship and Offshore Structures Congress (ISSC) from 2018 to 2021, having been a specialist member of the ISSC Committee on Subsea Technology from 2015 to 2018 and the Committee on Risers and Pipelines from 2012 to 2015.

Neil Bose is the Vice President (Research) at Memorial University, Newfoundland and Labrador's university. Previously he was Principal of the Australian Maritime College (AMC), a specialist institute of the University of Tasmania, and a Professor of Maritime Hydrodynamics. Neil obtained his B.Sc. in Naval Architecture and Ocean Engineering from the University of Glasgow in 1978 and his Ph.D., also from Glasgow, in 1982. He came to AMC in Tasmania in May 2007 as the Manager of the Australian Maritime Hydrodynamics Research Centre. His personal research interests are in marine propulsion, autonomous underwater vehicles, ocean environmental monitoring, ocean renewable energy, ice/propeller interaction and aspects of offshore design. Neil Bose is an ocean engineer and naval architect with an international reputation in marine propulsion, built up through close collaboration with international industry.



International Journal of Robotics and Automation (IJRA)
Vol. 9, No. 2, June 2020, pp. 113~122
ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp113-122

Dynamics of trunk type robot with spherical piezoelectric actuators

Aistis Augustaitis 1, Vytautas Jurėnas 2
1 Department of Informational Technologies, Vilnius Gediminas Technical University, Lithuania
2 Institute of Mechatronics, Kaunas University of Technology, Lithuania

Article history: Received Feb 7, 2020; Revised Feb 24, 2020; Accepted Mar 10, 2020

Keywords: Euler-Lagrange dynamics; Kinematics; Potential torque; Spherical piezoactuator

ABSTRACT

Trunk type robots (TTRs) are exceptional: they can provide a high level of maneuverability and have potential in medicine and in high-risk zones. A TTR is a long serial linkage of similar segments, usually connected by tendons or small actuators; a spherical actuator is the most attractive option. The motion of a real spherical actuator (RSA) can be easily obtained by applying the inverse piezoelectric effect. An RSA has three independent spinning axes, which remain perpendicular to each other regardless of the excitation history. Despite these features, the kinematics and dynamics of RSAs have received little theoretical treatment, which can be explained by the common disadvantages of other spherical actuators: sophisticated structure and complex control. The structures and abilities of TTRs are reviewed in the first section of this article. The kinematics of a piezoelectric TTR with two different RSAs is introduced at the beginning of the fourth section, and its inverse dynamics, obtained using Euler-Lagrange equations, is presented at the end of that section. Similar results are derived using an analytical-potential method in the fifth section; the analytical-potential method proves to be a quite accurate and effective option for determining the inverse dynamics of the TTR.

This is an open access article under the CC BY-SA license.

Corresponding Author: Aistis Augustaitis, Department of Informational Technologies, Vilnius Gediminas Technical University, Saulėtekio al. 11, Vilnius 10223, Lithuania. Email: lsntl@ccu.edu.tw

1. INTRODUCTION

A trunk type robot (TTR) is a serial chain of links, which can be connected with tendons or with flexible and rotary actuators. A real spherical actuator (RSA) has three perpendicular spinning axes, and this state is constant regardless of the excitation history. A spherical actuator usually has a very sophisticated structure and is hard to control accurately; consequently, RSAs are rarely used in either practical or theoretical work. The grasping abilities of a TTR are investigated by Li, Teng, Xiao, Kapadia, Bartow and Walker in a 2D workspace. The robot is autonomous and consists of flexible pneumatic actuators, but has neither an end effector nor a closed-loop system. The numerical operations for target detection and path determination take only 0.2 s, while the time of manipulation ranges from 30 to 130 s in order to avoid unwelcome oscillations caused by low stiffness and high inertia [1]. A pneumatic robot is introduced by Mahl, Hildebert and Sawodny. It consists of 3 serial stages and a three-fingered end effector. Three flexible hoses, evenly placed around the circumference, are used to control one stage, and every stage can be bent independently. The total bending moment grows from the end point towards the first link of the robot, so its cross-sections are enlarged in the same manner [2].

Journal homepage: http://ijra.iaescore.com



Nagarajan, Kumar and Kanna are made a direct kinematic research for a TTR with tendons using ADAMS and ANSYS modeling systems. They have selected to explore the robot with four independent sections. Every section has 6 serially connected disks and universal joints between them. One tendon is able to transfer the force only in one direction. Therefore two tendons are needed to ensure an actuation of single DOF. Four evenly posted tendons are used to actuate a single section. Four evenly posted springs has been employed for every pair of disks to ensure the stiffness of the robot. The authors have proved again that deformations between the disks of the same section are distributed uniformly [3]. A special design of soft robot to mimic the motions of elephant trunk or snake slithering is generated. The robot is a combination of few sections. Each section is a group of tubular and helical segments. They are made from the strips using ionic polymer-metal nanocomposites (IPMCs). This type of segments are able to perform not only various actuations (linear expansion, contraction, bending and twisting), but energy harvesting and motion sensing too [4]. Jones and Walker have presented a kinematic theory for a TTR with flexible links. It can be actuated employing different air pressures in gofferred hoses or different lengths of tendons too. The estimation of position and orientation and a brief report about its singularities are described comprehensively. The local parameters of pressures or lengths and their first or second derivatives can be estimated according to inverse Jacobian matrix [5]. Chung, Rhee, Shim, Lee and Park have analyzed door opening robots. These robots can be used in emergency situations to shift humans. These authors have offered a new conception for that kind of robots. They suggest a manipulator with three fingers and the force sensors at their tips as an alternative for a wrist sensor. It has a high price and a complex structure too. 
The tested robot can be separated into three motion stages: an omnidirectional chassis, a manipulator with six degrees of freedom (DOFs) and an end effector with three fingers. These stages are actuated by 14 servomotors; 16 sonar sensors, 16 infrared sensors and a laser range finder are installed to enhance the robot's vision [6]. A TTR built from serially connected tubes with inclined end planes on one side was presented by Salomon and Wolf. It is able to perform 16 degrees of freedom. Planes of the same type are conjugated together, so a revolute actuator follows a cylindrical-inclined actuator, and so on. This robot provides high stiffness and position accuracy. Its dimensions are: tube diameter 7.7 cm, total length 80 cm and maximum deviation angle 180°. The maximum transverse force at the end of the robot should be less than 25% of its own weight [7]. The motion abilities of a snake-type robot (STR) were investigated and tested by Liljebäck, Haugstuen and Pettersen. This robot is composed of 10 spherical segments serially connected by short flexible links, so an STR has a structure similar to a TTR. A rotating frame with crosswise-oriented wheels is attached to every segment, which ensures that each segment can move both straight ahead and transversely. Proper control of the robot is achieved using the ode45 function in MATLAB. A combined task of the STR is to detect a straight cursor line, change its direction and crawl along the line; the task was successfully accomplished using a novel stabilization algorithm [8]. Kelasidi, Jesmani, Pettersen and Gravdahl introduced a multi-objective optimization framework. The method is applied to investigate the locomotion efficiency of snake-type robots using a weighted-sum approach, with Particle Swarm Optimization (PSO) run for different sets of weight factors.
The power consumed by the robot and its forward velocity are selected as the main research parameters. Consistently, improved energy efficiency of locomotion is associated with decreased forward velocity [9]. Important functions of redundant robots, such as motion, object recognition, fetching and safe interaction with humans, have been widely investigated by Luo and Kuo. A service-oriented multi-agent system (SoMAS) is used to control and analyze a robot within a cyber-physical system (CPS). The results of the mentioned operations confirm the effectiveness of the integrated systems with regard to high position accuracy and fast speed; other quality factors of the robot, compared with the results of other researchers, are mentioned as well [10]. Several piezoelectric robots, developed in the Mechatronics Institute of KTU, are presented in Figures 1 and 2. They can be classified as trunk type robots. Piezoelectric TTRs are distinguished by their small dimensions (total length usually below 20 cm) and high ratio of output power to mass. They also possess a simple structure and high position accuracy, which makes them attractive for medicine or high-risk zones. Extra freedom to maneuver is achieved by employing a piezoelectric spherical actuator with three independent spinning DOFs. A piezoelectric tube (Figure 3) and a hollow ball are the main structural parts of the actuator. The ball is a passive link made of aluminium, steel or another material with a high Young's modulus; several layers of distinct materials are also a possible structural option for the passive ball. A single piezoelectric tube has to be modified in order to provide three permanent spinning axes. Therefore an internal electrode
Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 113 – 122



of the tube is separated into three equal sections. Driving teeth are employed to define accurate contact zones with the ball. Control techniques for multi-axis piezoelectric actuators have already been defined [11]. The aim of this work is to determine the inverse dynamics of a TTR with two different spherical actuators. This robot has the maximum number of DOFs for 3D manipulation. The determination of the robot dynamics also includes the estimation of Coriolis and centrifugal forces, which are hard to evaluate analytically. For this reason the universal and effective method of Euler-Lagrange equations is used, together with an analytic-potential model. The results of the two methods are almost equal, but their operational speeds differ.

Figure 1. Piezorobot with 6 DOFs

Figure 2. Piezorobot with 15 DOFs

Figure 3. Structure of modified piezoelectric tube

2. REVIEW OF EULER-LAGRANGE DYNAMICS
The Euler-Lagrange (E-L) method is one of the most universal and simplest ways to solve the inverse dynamics of a mechanical system. It allows the use of different coordinate systems (CSs) and is built from the highest-level integral members: the potential Pi and kinetic Ti energies of the links. The E-L method is based on a differential approach; therefore it can provide all possible action and reaction forces or torques. It leads to the final form (1) [12]:

f = M(q)·q̈ + C(q, q̇)·q̇ + g(q)  (1)

where: f – vector of total torques for actuation of the DOFs; q, q̇ – vectors of local robot coordinates and their first derivatives respectively; M(q)·q̈ − matrix term of torques evaluating the inertial properties; C(q, q̇)·q̇ − matrix term of torques estimating the effects of friction and damping and the influence of centrifugal and Coriolis forces; g(q) – matrix term of torques evaluating the effect of gravity, and so on. The E-L method can be employed to determine the dynamics of systems with flexible links [13] and to implement various restrictions for their investigation [14]. This method is selected to evaluate the torques of the robot DOFs at the chosen time moments; the dependencies of the torques on time can be approximated as well. The general Euler-Lagrange equation is presented in (2). Accordingly, a special variable La called the Lagrangian is needed (3) [15]. A new, more comprehensive form of the E-L equation (4) can be written using the special features of the Lagrangian. The kinetic energy of spinning link i is evaluated by (5) and the potential energy of the same link is determined by (6).

d/dt(∂La/∂q̇) − ∂La/∂q = f  (2)

La = Σ(Ti − Pi)  (3)

Dynamics of trunk type robot with spherical piezoelectric actuators (Aistis Augustaitis)


f = d/dt(∂T/∂q̇) − ∂T/∂q + ∂P/∂q  (4)

Ti = Trace(ω̄i·Ii·ω̄iᵀ)/2  (5)

Pi = mi·g·hi  (6)

where: d/dt, ∂/∂qi – a single derivative with respect to time and a partial derivative with respect to a local coordinate respectively; La − substitute of mechanical energy for the investigated system (Lagrangian); t – time; Ti, Pi – kinetic and potential energies of link i respectively; Trace() − sum of the elements on the main diagonal of a matrix; ω̄i − matrix of angular speed projections onto the base CS for link i; Ii, mi, hi – moment of inertia, mass and center height of link i with respect to the grounded CS; g – acceleration of gravity, and so on. The speed of any robot point can be estimated employing (7), and the derivative of a single orientation matrix can be determined using (8):

vi = Ṙi·pi0  (7)

Ṙ1 = (∂R1/∂q1)·q̇1  (8)

where: vi − global speed vector of the selected link-point i; pi, pi0 − global and initial position vectors of the link-point respectively; Ri − global orientation matrix of the link directly associated with point i; R1 − single orientation matrix from the first CS to the grounded CS; qi − local coordinate (angle) of link i, and so on.
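As a numerical illustration of form (1), the following sketch evaluates the actuation torque of a single rotary DOF, for which M(q), C(q, q̇) and g(q) reduce to scalars; the uniform-rod parameters are illustrative values, not those of the TTR.

```python
import math

def el_torque(q, qd, qdd, m=0.0217, L=0.08, b=0.0):
    """Scalar instance of f = M(q)*qdd + C(q, qd)*qd + g(q), eq. (1),
    for a uniform rod of mass m (kg) and length L (m) pivoting about
    one end. All parameter values are illustrative."""
    M = m * L * L / 3.0                      # inertia term about the pivot
    C = b                                    # viscous damping term (none here)
    g = m * 9.81 * (L / 2.0) * math.sin(q)   # gravity term
    return M * qdd + C * qd + g

# Static pose: only the gravity term contributes to the torque.
tau_static = el_torque(q=0.12, qd=0.0, qdd=0.0)
```

In a static pose the inertial and damping terms vanish and only the gravity term remains, which mirrors the dominance of the potential torques over the kinetic ones in the results below.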

3. PARAMETERS OF DYNAMICAL TASK
A TTR with two different spherical actuators (Figure 4) is selected for exploration. The preferred spherical actuator consists of a modified piezoelectric tube with three internal electrodes evenly distributed in the circumferential direction and a spherical segment. The three longitudinal zones of the tube can be excited independently, but the generated displacements are partially coupled because of the monolithic structure. The kinematical lengths of the first (OO1) and second (O1O2) robot links are chosen equal to 40 mm. The first link of the TTR is composed of a piezoelectric tube and two cut balls. The external radii of the steel balls are 10 mm and their internal radii 9 mm. The external diameter of the tube is 10 mm; its internal diameter and length are 8 mm and 24 mm respectively. The second link of the TTR is also made of a piezoelectric tube; its length is 31 mm and it has the same cross-section dimensions. The tubes are made of PZT-4, whose density is taken as 7.6 g/cm³; the density of steel is taken as 7.8 g/cm³. Therefore the masses of the first and second links are 21.7 g (9) and 6.7 g (10) respectively:

m1 = 2·ρst·Vcb + ρPZT·π·(D² − d²)·l1/4  (9)

m2 = ρPZT·π·(D² − d²)·l2/4  (10)

(Vcb is the volume of a single cut hollow ball, determined by R, r and the cut geometry)

where: m1, m2 − masses of the first and second links respectively; π − Archimedes' constant; R, r – external and internal radii of the ball; D, d – external and internal diameters of the piezoelectric tube; ρst, ρPZT − densities of steel and PZT-4 respectively; l1, l2 − lengths of the first and second piezoelectric tubes respectively. A reference point of the grounded robot coordinate system (CS) is marked as O; the reference point of the first link CS is denoted O1 and that of the second link CS O2. The axis z0 is parallel to the axis of gravity, the axis z1 to the longitudinal symmetry axis of the first link, and the axis z2 to the symmetry axis of the second link. The first and second coordinate systems of the robot are rigidly attached to the first and second links respectively. The configuration in which the same-type axes are parallel to each other (x0-x1-x2, y0-y1-y2 and z0-z1-z2) is denoted as the initial, reference configuration. The tube of the first spherical actuator is its primary link, with direct control of its secondary link. The tube of the second spherical actuator is its secondary link with indirect motion: it provides two actions at once, excitation and movement. The orientations of its tube axes are permanent because its geometry is static. Therefore the first spherical actuator can be referred to as the direct type and the second spherical actuator as the inverse type [16], with rotational axes x2-y2-z2 placed at the reference point O1. This repetition of the O2 coordinate system is important to ensure correct actuation. Consequently the evaluation methods for the orientations of the movable links are going to differ.
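The link masses quoted above can be checked numerically. The sketch below recomputes the tube masses from the stated dimensions and densities; the mass of a single cut ball depends on the cut geometry, so the paper's value of 8.3 g per ball is taken as given.

```python
import math

def tube_mass(D, d, L, rho):
    """Mass of a hollow cylinder: rho * pi/4 * (D^2 - d^2) * L (SI units)."""
    return rho * math.pi / 4.0 * (D * D - d * d) * L

RHO_PZT = 7600.0                                   # PZT-4 density, kg/m^3
m_tube1 = tube_mass(0.010, 0.008, 0.024, RHO_PZT)  # first-link tube
m_tube2 = tube_mass(0.010, 0.008, 0.031, RHO_PZT)  # second link

m_ball = 8.3e-3              # one cut hollow ball; value taken from the paper
m1 = m_tube1 + 2.0 * m_ball  # first link: tube + two cut balls
```

The recomputed tube masses match the quoted 5.2 g and 6.7 g, and the first-link total agrees with the quoted 21.7 g to within rounding.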

Figure 4. TTR with two different spherical actuators

To properly determine the orientation matrix of a selected link, it is important to evaluate its actuation sequence. Realizing a hybrid rotational axis and ensuring a proportional excitation of the selected DOFs is a very complicated technical task because of the finite stiffness of the ball, the dependence of friction on speed, the loss of a contact pair, the action of external torque, and so on. A real spherical actuator can be manipulated more accurately only if its DOFs are activated separately; this also yields a simpler control technique, which requires less experimental testing. To implement casual orientations for the robot links, the first link is spun about the x0 and y0 axes by angles of 0.12 and 0.15 rad respectively, and the second link about the x2 and y2 axes by angles of -0.1 and -0.15 rad respectively. To realize the next robot configuration, only the x-type axes are activated. Each angular-speed dependence of the x-type axes on time is chosen to be of triangle type (Figure 5). The amplitudes of the global angular speeds for axes x0 and x2 are selected as 0.1 and -0.05 rad/s respectively. Both DOFs start at the same time and their durations are 2 s. The maximum magnitudes of the local angular speeds of the movable links are equal to the magnitudes of their permanent accelerations (0.1 and -0.15 rad/s² respectively). These parameters can be easily estimated and are important for the analytical estimation of kinetic torques in the robot dynamics. The center of mass of each link is taken as the middle point of its longitudinal symmetry line, because each robot link is of rod type. The mass moment of inertia about a parallel-shifted axis of a body is presented in (11). The first link's moment of inertia about the x1 or y1 axis can be evaluated by (12) and is 16.6·10⁻⁶ kg·m².
Its moment of inertia about the z1 axis is determined by (13) and is equal to 1.4·10⁻⁶ kg·m². The second link's moment of inertia about the x2 or y2 axis can be estimated with (14) and is 4.6·10⁻⁶ kg·m²; its moment of inertia about the z2 axis is determined by (15) and is equal to 0.6·10⁻⁶ kg·m².

Ip_a = Ic_a + m·d²  (11)

[the explicit expressions (12)-(15), which apply (11) to the tube and ball components of each link, are illegible in the source]

where: Ic_a, Ip_a – mass moments of inertia about a central axis and a parallel-shifted axis of the body respectively; m – body mass; d – normal distance between the axes of inertia; I1_x, I2_x – mass moments of inertia about the x axes of the first and second links respectively; mt1 − mass of the first link tube (5.2 g); d1 − distance between the inertial axes of the first link (20 mm); mb − mass of the hollow ball (8.3 g); L1 − kinematical length of the first link (40 mm); d2 − distance between the inertial axes of the second link (24.5 mm), and so on. The first link of the TTR can turn only about the axes of coordinate system O. This means that any spinning axis of the CS is a hybrid one with respect to the principal axes of the first link. An analytic-geometric approach can be used to evaluate the mass moment of inertia for a freely chosen axis (MIFCA). A freely selected axis of inertia OL has to cross the mass center of body K and is expressed by three spatial angles α, β, γ with respect to its principal axes x, y and z (Figure 6). The general formula of the MIFCA is introduced in (16). The centrifugal moments Iyz, Ixz and Ixy of the body are equal to 0 because it is fully symmetric and isotropic. It is also important to mention the orientation matrix (17) of body K with respect to the base CS. The mass moments of inertia of the body about the main CS axes can then be determined directly (18-20). If the moment of a robot link is estimated with respect to the earlier spherical joint, the additional inertial quantity of the shifted axes has to be evaluated.

Figure 5. Shape of angular speed dependence on time

Figure 6. Mass moment of inertia for a freely selected axis

IOL = Ix·cos²α + Iy·cos²β + Iz·cos²γ  (16)

[the orientation matrix (17) of body K with respect to the base CS is illegible in the source]

Ix0 = Ix·ux² + Iy·uy² + Iz·uz²  (18)

Iy0 = Ix·vx² + Iy·vy² + Iz·vz²  (19)

Iz0 = Ix·wx² + Iy·wy² + Iz·wz²  (20)

where: IOL − mass moment of inertia of body K about the selected axis OL; α – angle between the spinning axis OL and the principal axis x of the body (β and γ are the analogous angles for axes y and z); Ix, Iy, Iz – principal moments of inertia of body K; ux, uy, uz − projections of the principal axes (x, y and z) onto the base axis x0, with vx, vy, vz and wx, wy, wz the analogous projections onto y0 and z0; Ix0, Iy0, Iz0 − mass moments of inertia of the body about the main CS axes (x0-y0-z0), and so on.
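Formula (16) can be sketched numerically: for a body whose centrifugal moments vanish, the moment of inertia about any axis through the mass center follows from the direction cosines alone. The principal values below are illustrative.

```python
import math

def mifca(Ix, Iy, Iz, axis):
    """Mass moment of inertia about a freely chosen axis, eq. (16),
    for a body with zero products of inertia. `axis` is any non-zero
    direction vector given in the principal axes."""
    ax, ay, az = axis
    n = math.sqrt(ax * ax + ay * ay + az * az)
    ca, cb, cg = ax / n, ay / n, az / n   # cos(alpha), cos(beta), cos(gamma)
    return Ix * ca * ca + Iy * cb * cb + Iz * cg * cg

# Rod-like link with transverse symmetry (illustrative values, kg*m^2)
I_axis = mifca(16.6e-6, 16.6e-6, 1.4e-6, (0.0, 0.2, 1.0))
```

For an axis aligned with a principal axis the formula returns the corresponding principal moment, and for a fully isotropic body it returns the same value for every axis.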

4. APPLICATION OF EULER-LAGRANGE DYNAMICS
The initial orientation of the first link is estimated by (21) using the Roll-Pitch-Yaw method, and the orientation of the second link by (22) [17]. The latter equation can be proved employing the orthogonality and transposition features of matrices. The orientation of the first link during the first and second halves of actuation can be evaluated by (23) and (25) respectively, and the orientation of the second link during the first and second halves by (24) and (26) respectively. In order to estimate a TTR control function of any DOF, three main time points (initial, medium and final) of its excitation period are used.
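The orientation bookkeeping behind (21)-(26) can be sketched with elementary rotation matrices. The minimal example below composes rotations about the x and y axes with the Section 3 angles and checks the orthogonality property used to obtain the second-link orientation; it is a simplified stand-in, not the paper's exact expressions.

```python
import math

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

# Orientation of the first link after rotations about x0 and y0
R1 = matmul(rot_y(0.15), rot_x(0.12))

# For a rotation matrix the transpose equals the inverse; this is the
# orthogonality property exploited in (22).
I3 = matmul(R1, transpose(R1))   # should be the identity matrix
```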



The local speed dependencies on time of the first link can be determined in the same triangular form, and the speed dependencies of the second link can be written in the same way. The global speeds during the first and second halves of actuation for the first link are estimated by (27)-(28), and the speeds during the first and second halves for the second link are evaluated by (29)-(30).

[equations (21)-(30), giving the link orientation matrices and the global angular speeds, are illegible in the source]

where: Rx − single orientation matrix about axis x; RxT − transpose matrix of Rx; R12 − orientation matrix from the second CS to the first CS; ε1, ε2 − variables of angular acceleration about the x axes of the first and second links respectively; φx, φy − selected angular movements of the first link about the x0 and y0 axes (0.12 and 0.15 rad respectively); ω1I, ω1II − angular speeds of the first link during the first and second halves respectively; Ω1, Ω2 − amplitudes of the global angular speeds about the axes x0 and (replaced) x2 respectively, and so on. The total kinetic energy of the system is estimated employing (31). The kinetic energy of the second link is determined by (32) and the kinetic energy of the first link by (33). The mass moment of inertia of the first link about axis x0 is evaluated by (34) and that of the second link about x0 by (35).

T = T1 + T2  (31)

[the explicit expressions (32)-(35) for the link kinetic energies and the mass moments of inertia about the x0 axis are illegible in the source]

where: T1, T2 – kinetic energies of the first and second links respectively; I1_x0, I2_x0 − mass moments of inertia of the first and second links respectively about the x0 axis, and so on. The total potential energy of the robot is estimated with (36); the potential energies of the first and second links are determined by (37) and (38) respectively.

P = P1 + P2  (36)


P1 = m1·g·h1  (37)

P2 = m2·g·h2  (38)

(h1 and h2 are the heights of the link mass centers in the grounded CS, which follow from the link orientations; their explicit expressions are illegible in the source)

where: P1, P2 − potential energies of the first and second links respectively; L2 − kinematical length of the second link (40 mm), and so on. Dynamical results are estimated and presented in Table 1 and Figure 7 with regard to the Euler-Lagrange method and the selected parameters of motion.

Table 1. Dynamical results of the TTR
Time moment (s) | Quantity  | M1 (Nm)      | M2 (Nm)
0.00            | Total     |  11.12·10⁻⁴  | -2.844·10⁻⁴
0.00            | Potential |  11.08·10⁻⁴  | -2.837·10⁻⁴
0.00            | Kinetic   |   3.781·10⁻⁶ | -6.697·10⁻⁷
0.33            | Potential |  11.62·10⁻⁴  | -3.006·10⁻⁴
0.67            | Potential |  13.30·10⁻⁴  | -3.532·10⁻⁴
1.00            | Total     |  15.89·10⁻⁴  | -4.350·10⁻⁴
1.00            | Potential |  16.00·10⁻⁴  | -4.370·10⁻⁴
1.00            | Kinetic   | -11.34·10⁻⁶  |  2.009·10⁻⁶
1.33            | Potential |  18.67·10⁻⁴  | -5.188·10⁻⁴
1.67            | Potential |  20.30·10⁻⁴  | -5.680·10⁻⁴
2.00            | Total     |  20.82·10⁻⁴  | -5.836·10⁻⁴
2.00            | Potential |  20.82·10⁻⁴  | -5.836·10⁻⁴
2.00            | Kinetic   |   0          |  0

where: M1, M2 − total torques about the x axes of the first and second spherical actuators respectively.

Figure 7. Dependencies of total torque on time

5. DYNAMICS OF ANALYTICAL-POTENTIAL METHOD
The analytical-potential method has been selected to verify the correctness of the described E-L algorithm and its results. The orientations of the robot links at the initial, medium and final configurations are again determined by (23)-(26). The control of the first link is based on the main coordinate system O, whose z0 axis coincides with the gravitational axis; therefore the evaluation of the potential torque of the first spherical actuator is quite simple (39). The torque about the x axis is rated positive when the direction of the link's y projection goes from the y axis to the z axis (upward in the main quarter of the y-z plane or downward in the -y-z plane). Every magnitude of potential torque is multiplied by -1 in order to account for the opposite direction of the gravitational force, i.e. the sense (negative to positive) of its lever arm.

[expression (39) for the potential torque of the first spherical actuator is illegible in the source]

where: M1_x0 − torque about the x0 axis because of gravity. To evaluate the potential torque of the second link, the orientation of the spinning axis x2 relative to the global plane x0-y0 has to be determined first. This can be done employing the arctangent function (40-42), whose branch depends on the signs of the projections of the mentioned axes (Figure 8). Then the apparent torque (43) of the link and its orientation (44-46) relative to the global plane x0-y0 have to be estimated. Finally, the perpendicularity of the apparent torque to axis x2 is evaluated, as well as the perpendicularity of the same axis to the gravitational axis z0. The last two steps are essential in the estimation of the torque about the replaced axis x2 (47).



[equations (40)-(47), giving the arctangent branch functions, the apparent torque of the second link and the real torque about the replaced axis x2, are illegible in the source]

where: atan_i_j − arctangent function used to estimate the direction (i marks the sense of orientation and j indexes the applicable branch); N2_x, M2_x – apparent and real torques about the replaced axis x2 because of gravity, and so on. Numerical and graphical results generated with the described method of inverse dynamics are presented in Table 2 and Figure 9; the method is based on the action of gravity with the same motion parameters as before.

Figure 8. Projection of a robot link or axis onto the plane x0-y0

Table 2. Results of the analytic-potential research
Time moment (s) | M1* (Nm)   | M2* (Nm)
0.00            | 11.08·10⁻⁴ | -2.837·10⁻⁴
0.33            | 11.62·10⁻⁴ | -3.006·10⁻⁴
0.67            | 13.30·10⁻⁴ | -3.532·10⁻⁴
1.00            | 16.00·10⁻⁴ | -4.370·10⁻⁴
1.33            | 18.67·10⁻⁴ | -5.188·10⁻⁴
1.67            | 20.30·10⁻⁴ | -5.680·10⁻⁴
2.00            | 20.82·10⁻⁴ | -5.836·10⁻⁴
M1*, M2*: potential torques about the x axes of the first and second spherical actuators respectively
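The essence of the analytic-potential method — obtaining the potential torque as the negative derivative of the potential energy with respect to the joint angle — can be sketched for a single rod-like link rotating about a horizontal axis. The one-link, one-rotation model below is a deliberate simplification; the mass and kinematical length are the first-link values from Section 3.

```python
import math

G = 9.81  # m/s^2

def potential(theta, m=0.0217, L=0.04):
    """Potential energy of a rod-like link tilted by theta from the
    vertical about a horizontal axis; mass center at mid-length."""
    return m * G * (L / 2.0) * math.cos(theta)

def potential_torque(theta, h=1e-6):
    """Torque = -dP/dtheta, by a central finite difference."""
    return -(potential(theta + h) - potential(theta - h)) / (2.0 * h)

tau = potential_torque(0.12)
```

The finite-difference value agrees with the analytic gradient m·g·(L/2)·sin(θ), which is the kind of cross-check the paper performs between the E-L and analytic-potential results.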

Figure 9. Dependencies of potential torque on time

6. CONCLUSION
The results of the Euler-Lagrange dynamics are partly confirmed by the results of the analytic-potential method, and the kinetic torques of the E-L method can be confirmed theoretically as well. The actuators' dependencies of total torque over large time steps can be approximated as linear graphs, while their dependencies of potential torque over smaller time steps have shapes similar to the links' dependencies of position. The kinetic torques are more than 100 times smaller than the potential ones for the given input parameters. The calculation in MATLAB of 14 potential torques by the E-L method lasts about 2.8 s, and the analogous evaluation by the analytic-potential method takes about 3.1 s. The estimation, with the same software, of 6 potential and 6 kinetic torques by the Euler-Lagrange method lasts about 5.3 s. A casual E-L method can be simplified or converted into the analytical-potential method in order to improve coding speed and to accelerate the numerical operations of the dynamical analysis of a trunk type robot.



ACKNOWLEDGEMENTS
The work was supported by the Research Council of Lithuania under the project SmartTrunk, No. MIP-084/2015.

REFERENCES
[1] J. Li, et al., "Autonomous Continuum Grasping," International Conference on Intelligent Robots and Systems, pp. 4569-4576, 2014.
[2] T. Mahl, et al., "Forward Kinematics of a Compliant Pneumatically Actuated Redundant Manipulator," IEEE Conference on Industrial Electronics and Applications, pp. 1267-1273, 2012.
[3] A. Nagarajan, S. K. R. Kanna and V. M. Kumar, "Multibody dynamic simulation of a hyper redundant robotic manipulator using ADAMS ansys interaction," International Conference on Algorithms, Methodology, Models and Applications in Emerging Technologies (ICAMMAET), pp. 1-6, Chennai, 2017.
[4] S. E. Tabatabaie and M. Shahinpoor, "Artificial Soft Robotic Elephant Trunk Made with Ionic Polymer-Metal Nanocomposites (IPMCs)," International Robotics & Automation Journal, vol. 5, no. 4, pp. 138-142, 2019.
[5] B. A. Jones and I. D. Walker, "Kinematics for multisection continuum robots," IEEE Transactions on Robotics, vol. 22, no. 1, pp. 43-55, Feb. 2006.
[6] W. Chung, C. Rhee, Y. Shim, H. Lee and S. Park, "Door-Opening Control of a Service Robot Using the Multifingered Robot Hand," IEEE Transactions on Industrial Electronics, vol. 56, no. 10, pp. 3975-3984, Oct. 2009.
[7] O. Salomon and A. Wolf, "Inclined Links Hyper-Redundant Elephant Trunk-Like Robot," Journal of Mechanisms and Robotics, vol. 4, no. 4, 2012.
[8] P. Liljeback, I. U. Haugstuen and K. Y. Pettersen, "Path Following Control of Planar Snake Robots Using a Cascaded Approach," IEEE Transactions on Control Systems Technology, vol. 20, no. 1, pp. 111-126, Jan. 2012.
[9] E. Kelasidi, et al., "Locomotion Efficiency Optimization of Biologically Inspired Snake Robots," Applied Sciences, vol. 8, no. 1, 2018.
[10] R. C. Luo and C. Kuo, "Intelligent Seven-DoF Robot With Dynamic Obstacle Avoidance and 3-D Object Recognition for Industrial Cyber-Physical Systems in Manufacturing Automation," Proceedings of the IEEE, vol. 104, no. 5, pp. 1102-1113, May 2016.
[11] R. Bansevičius and V. Kargaudas, "Attitude Control of Micro- and Nanosatellites Using Multi-Degree-of-Freedom Piezoelectric Actuators," Vibration Problems ICOVP, vol. 139, pp. 379-384, 2011.
[12] J. Iqbal, "Modern Control Laws for an Articulated Robotic Arm: Modeling and Simulation," Engineering, Technology & Applied Science Research, vol. 9, no. 2, pp. 4057-4061, 2019.
[13] Chunxia Z., et al., "Effect of Links Deformation on Motion Precision of Parallel Manipulator Based on Flexible Dynamics," Industrial Robot, vol. 44, no. 6, pp. 776-787, 2017.
[14] D. Dopico, et al., "Direct Sensitivity Analysis of Multibody Systems with Holonomic and Nonholonomic Constraints via an Index-3 Augmented Lagrangian Formulation with Projections," Nonlinear Dynamics, vol. 93, pp. 2039-2056, 2018.
[15] D. Baleanu, et al., "The Motion of a Bead Sliding on a Wire in Fractional Sense," Acta Physica Polonica A, vol. 131, no. 6, pp. 1561-1564, 2017.
[16] A. Augustaitis, et al., "Kinematics of Trunk-Like Robots with Piezo Actuators," Mechanics, vol. 24, no. 2, pp. 254-259, 2018.
[17] Y. H. Hwang, et al., "An Electrorheological Spherical Joint Actuator for a Haptic Master with Application to Robot-Assisted Cutting Surgery," Sensors and Actuators A: Physical, vol. 249, pp. 163-171, 2016.

BIOGRAPHIES OF AUTHORS

Aistis Augustaitis received his BS degree in Transport Engineering from Kaunas University of Technology (KTU) in 2014 and his MS degree in Mechanical Engineering from KTU in 2016. He is currently studying for a PhD in Mechanical Engineering at Vilnius Gediminas Technical University (VGTU). His current research interests include the trajectory planning and control of trunk-type robots, with emphasis on numerical analysis.

Vytautas Jurėnas graduated from Kaunas University of Technology (KTU) in 1979 with a degree in Mechanical Engineering and received his PhD from KTU in 1993 in the same field. He is a senior researcher at the KTU Mechatronics Institute. His research interests include piezomechanics and the diagnostics of mechanical systems.



International Journal of Robotics and Automation (IJRA) Vol.9, No.2, June 2020, pp. 123~134 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp123-134


Control system design of duct cleaning robot capable of overcoming L and T-shaped ducts
Myeong In Seo, Woo Jin Jang, Junhwan Ha, Kyongtae Park, Dong Hwan Kim
Department of Mechanical Design and Robot Engineering, Seoul National University of Science and Technology, Korea

Article Info

Article history:
Received Dec 17, 2019
Revised Feb 13, 2020
Accepted Mar 4, 2020

Keywords:
Autonomous driving
Duct cleaning robot
Image processing
Location identification

ABSTRACT

This study introduces the control method of a duct cleaning robot that enables real-time position tracking and self-driving through L-shaped and T-shaped duct sections. The developed robot has three legs and is designed to respond flexibly to duct sizes. The position of the robot inside the duct is identified using a UWB communication module and a location estimation algorithm. Although UWB communication has a relatively large distance error inside metal, the positional error was reduced by introducing appropriate filters so that the robot position is estimated accurately. TCP/IP communication allows commands to be exchanged between the PC and the robot and live images from the camera attached to the robot to be received. Using Haar-like features and classifiers, the robot can recognize duct shapes that are difficult to traverse, such as L-shaped and T-shaped ducts, and it moves successfully inside the duct according to the corresponding moving algorithms.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Dong Hwan Kim
Department of Mechanical System Design Engineering, Seoul National University of Science and Technology
232 Gongneung-ro, Nowon-gu, Seoul 01811, Korea
Email: dhkim@seoultech.ac.kr

1. INTRODUCTION
In buildings and kitchens, air is supplied or circulated through ducts connected to various facilities. Smoke and dust contained in the air passing through a duct in an enclosed space accumulate in the duct and build up most in the bends. This causes indoor air pollution, so ducts should be cleaned regularly. In kitchen ducts especially, dust and oil-stain deposits obstruct circulation or cause fire accidents due to high-temperature air. The thermal efficiency of heating and cooling ducts for air conditioning also decreases when dust builds up. Duct cleaning methods are largely classified as dry and wet [1]. Since it is virtually impossible for humans to clean ducts themselves, the introduction of a duct-cleaning robot is required. The shape of the robot can be determined according to the method of cleaning the duct. Due to the nature of ducts, especially in closed spaces, wire-twisting problems occur when using a wire-driven robot. It is therefore necessary to build a wirelessly driven robot that can move autonomously along the ducts. At the same time, the location of the robot needs to be identified while it moves along the complex duct structure in order to monitor the operation. Meanwhile, existing duct cleaning robots [2-4] cope poorly with duct diameters that vary by location. In general, air conditioning ducts are concealed and installed inside the ceiling. As the shape and dimensions of the ducting change from the air-conditioning facility to the inlet and outlet of each room, bends are formed, and the robot must pass these bends successfully. A typical bend is either L- or T-shaped, and the robot should be able to pass through such non-uniformly shaped ducts.
Journal homepage: http://ijra.iaescore.com



The duct-cleaning robot introduced in this work is similar to existing duct-cleaning robots [5-6], but it is designed as a three-legged robot (it looks like three tracks in appearance) that is better suited to circular ducts, allowing stable movement on the walls of curved ducts. It is also operated wirelessly to prevent the wire twisting that occurs frequently inside long and winding ducts. Unlike conventional duct-cleaning robots, it is designed to drive autonomously using PSD (Position Sensitive Device) sensors and cameras, without user manipulation. The PSD sensor measures the distance to the edges of an L- or T-shaped duct, and the camera is used to distinguish between L- and T-shaped ducts for self-driving. The user only presses the start and stop buttons and views the images inside the duct to check the robot's current location and cleaning status. Images of the duct interior in the enclosed space are sent wirelessly to the user's PC over Wi-Fi through the embedded PC, a Raspberry Pi. In this study, UWB modules were installed on the outside of the duct to estimate the current position of the robot, and correct identification of T-shaped ducts using a classifier algorithm based on the acquired images enables autonomous driving.

2. SYSTEM CONFIGURATION AND INTERFACE
When the pulley radius is r and the angular velocity of each motor driving a leg is ωi, the travel velocity of the robot center can be obtained from the pulley radius and the angular velocities. Using these variables, the robot center velocity can be determined for each case: three wheels at the same speed, one wheel stopped and the other two driving, only one wheel driven, or three different wheel speeds. Once the three wheels run at the same speed, the velocity of the robot is v = r·ω1 = r·ω2 = r·ω3. In the case of different velocities on the three legs, the velocity of the central body is expressed as a velocity gradient seen from the pipeline side, as shown in Figure 1. Thus the robot body velocity is determined by the average of the angular velocities of the three motors:

v = r·(ω1 + ω2 + ω3)/3  (1)

Figure 2 shows the appearance of the manufactured duct cleaning robot. The front of the robot is equipped with a camera, while the rear is equipped with a brush for dusting. The robot is driven by the user's start signal from the PC, which is sent to the Arduino microcontroller by the Raspberry Pi through TCP/IP communication. The PSD sensor connected to the Arduino measures the diameter of the duct, which is important information for autonomous driving in the duct. In addition, the camera connected to the Raspberry Pi continuously captures images inside the duct and sends them to the PC. The L- and T-shaped structures inside the duct, which are difficult to detect with PSD sensors alone, are distinguished through the camera's image processing, and the corresponding movements of the robot for each recognized duct structure support its autonomous driving. Users can continue to observe the inside of the duct by pressing the start button on the PC.
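The body-velocity rule described above (the mean of the three pulley surface speeds) can be sketched directly; the pulley radius and wheel speeds below are illustrative values.

```python
def body_velocity(r, w1, w2, w3):
    """Robot center velocity as the mean of the three leg surface
    speeds: r * (w1 + w2 + w3) / 3."""
    return r * (w1 + w2 + w3) / 3.0

v_equal = body_velocity(0.02, 5.0, 5.0, 5.0)    # all wheels equal: v = r*w
v_partial = body_velocity(0.02, 5.0, 5.0, 0.0)  # one wheel stopped
```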
In order to know the exact location of the robot inside the duct, UWB anchors installed on the outside of the duct and a UWB tag on the robot body are used. The distance values between the anchors and the tag are transmitted to the PC by Bluetooth communication, and the robot's location is estimated there by a trilateration algorithm. The control system developed in this work has three main parts: a UWB module for estimating the robot's real-time position, an Arduino Mega board for controlling the robot, and a Raspberry Pi Zero for transmitting images to the PC and relaying TCP/IP commands so the robot can be controlled remotely. Figure 3 shows the overall control system configuration. The Arduino controls the motors and LED lights for driving and reads the values from the PSD sensors. The UWB module transmits the distance values to the PC via Bluetooth communication. The Raspberry Pi transmits images captured by the on-board camera to the PC via TCP/IP communication and also forwards command signals from the PC to the Arduino.

Figure 1. Velocity profile at the side view of the pipeline

Int J Rob & Autom, Vol. 9, No. 2, June 2020 : 123 – 134



Figure 2. (a) Overall view of duct cleaning robot, (b) assembled robot, (c) picture of robot running in a duct, and (d) drawings of duct cleaning robot

Figure 3. Control system structure

2.1. UWB module configuration
As mentioned earlier, UWB communication is used for indoor location measurement. The DWM1000 [7] used for communication is a CMOS chip for measuring precise distances with an accuracy of 10 cm. An Arduino Pro Mini is adopted to operate each UWB module. One tag on the robot and four anchors outside the duct are placed as shown in Figure 4, and a PCB (printed circuit board) is used to reduce the size of the anchor and tag modules so that they can be placed inside the robot. A Bluetooth module is also attached to the PC to receive the distance values from the UWB modules wirelessly.

Figure 4. UWB modules placement for positioning in duct environment

Control system design of duct cleaning robot capable of overcoming L and T-shaped ducts (Myeong In Seo)



2.2. Controller configuration
In this work, the ATmega2560-based Arduino Mega was used to acquire the values of the nine PSD sensors that identify the internal shape of the duct. To simplify the circuit, a PCB compatible with the Arduino Mega was created. One DC motor for each wheel and two smaller DC motors for wire control were connected to DC motor drivers.

2.3. Infrared sensor
As stated earlier, PSD sensors are used to identify the structure inside the duct, which is crucial for autonomous driving of the robot. The infrared sensors, with a range of 2 cm to 15 cm, detect structural changes inside the duct while the robot is driving. A Kalman filter is used to handle the analog values received from the infrared sensors more effectively, since part of the sensor's signal is absorbed by the duct's internal metal parts, potentially distorting the sensor values. The Kalman filter equations used here are as follows:

K_k = (P_{k-1} + Q) / (P_{k-1} + Q + R)   (2)
x̂_k = x̂_{k-1} + K_k (z_k − x̂_{k-1})   (3)
P_k = (1 − K_k)(P_{k-1} + Q)   (4)

where K_k is the Kalman gain, P_k is the error covariance, Q is the process noise variance, R is the measurement noise variance, and z_k and x̂_k are the measured and estimated state variables at the k-th step, respectively. Based on equations (2)-(4), the Kalman filter was implemented on the microcontroller. Figure 5 compares the output data before and after filtering of the infrared sensor signal: the blue line shows the raw data and the red line shows the filtered data. In this case, parameter values of 0.25, 0.01, and 0.09 were used for the filter, and 1.0 was used as the initial value. The experiments confirmed that the filtered value was more stable than the original signal (Figure 5).
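A minimal scalar version of the filter in equations (2)-(4) can be sketched as follows. This is an illustrative sketch, not the paper's microcontroller code, and the default Q and R values are placeholders rather than the paper's exact parameter assignments:

```python
def kalman_step(x_hat, P, z, Q=0.01, R=0.25):
    """One scalar Kalman update for a slowly varying distance reading.

    x_hat : previous state estimate, P : previous error covariance,
    z : new raw sensor value, Q : process noise variance,
    R : measurement noise variance (Q, R defaults are illustrative).
    """
    P_pred = P + Q                      # predict: covariance grows by process noise
    K = P_pred / (P_pred + R)           # Kalman gain, cf. Eq. (2)
    x_hat = x_hat + K * (z - x_hat)     # correct estimate with measurement, Eq. (3)
    P = (1.0 - K) * P_pred              # update the error covariance, Eq. (4)
    return x_hat, P

# smooth a stream of noisy IR distance readings (cm)
x_hat, P = 1.0, 1.0
for z in [10.2, 9.8, 10.5, 9.9, 10.1]:
    x_hat, P = kalman_step(x_hat, P, z)
```

Because the gain K stays well below 1, single outliers are damped, which matches the smoother red curve reported in Figure 5.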

Figure 5. Distance data comparison using Kalman filter (x-axis: time [ms], y-axis: distance [cm])

3. DISTANCE MEASUREMENT USING UWB COMMUNICATION
3.1. UWB communication
UWB (ultra-wideband) communication is a wireless communication technology that can guarantee transmission distances of 10 m to 1 km over a wide frequency range from 3.1 GHz to 10.6 GHz. It was originally developed and applied for special military purposes, but has recently gained attention since being opened to the private sector. UWB, which transmits large amounts of information at low power, is a new technology with a very wide application range, including indoor observation, sports tracking, intruder detection systems, and prevention of heavy-equipment vehicle accidents. In this study, UWB communication and a real-time locating system (RTLS) were applied to implement a system that can automatically identify and track the position of objects. The communication module used is the DWM1000 from DECAWAVE, whose distance measurement is based on the time-of-flight (TOF) principle. Figure 6 shows the TOF distance measurement procedure, which uses DS-TWR (double-sided two-way ranging) [8], an extension of SS-TWR (single-sided two-way ranging) in which two round-trip time measurements are combined to obtain the TOF. Errors are reduced despite significantly long response delays, and the TOF is about 333 ns at the relatively large UWB operating range of 100 m.
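The combination of the two round-trip measurements in DS-TWR can be sketched with the standard estimator below (the function and variable names are ours, not DECAWAVE's API; t_round1 is measured against the other side's reply delay t_reply1, and likewise for the second exchange):

```python
C = 299_792_458.0  # speed of light, m/s

def ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2):
    """Time of flight from double-sided two-way ranging.

    Assumes t_round1 = 2*tof + t_reply1 and t_round2 = 2*tof + t_reply2;
    the ratio cancels the reply delays exactly, which is why errors stay
    small even when the response delays are long.
    """
    return ((t_round1 * t_round2) - (t_reply1 * t_reply2)) / \
           (t_round1 + t_round2 + t_reply1 + t_reply2)

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Range in meters from the estimated time of flight."""
    return C * ds_twr_tof(t_round1, t_reply1, t_round2, t_reply2)
```

For a true flight time of 333 ns the recovered range is roughly 100 m, consistent with the figure quoted above.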

Figure 6. Sequence of TWR using TOF

3.2. Estimating the location of the robot by trilateration
To identify the position of the robot inside the duct, a trilateration method is applied to estimate the center position of the robot while it moves, using anchors placed outside the duct and a tag on the robot body. The gradient descent method is applied to reduce the error between the actual and estimated values of the robot location. Figure 7 shows the basic environment for applying trilateration to obtain the robot's central position (x, y), using the distance values between four anchors and one tag. The measured distance d_i between the i-th anchor at (x_i, y_i) and the tag at the true position (x, y) satisfies:

d_i = sqrt((x − x_i)^2 + (y − y_i)^2), i = 1, …, n   (5)

where n is the number of anchors and d_i is the measured distance between the i-th anchor and the tag. Next, the estimated distance d̂_i from the i-th anchor to the estimated robot position (x̂, ŷ) is expressed as follows:

d̂_i = sqrt((x̂ − x_i)^2 + (ŷ − y_i)^2)   (6)

The error between the measured and estimated distances for each anchor is d_i − d̂_i, and the cost function minimized at each step can be defined as:

E = Σ_{i=1}^{n} (d_i − d̂_i)^2   (7)

Using the gradient descent method, the robot's central position is updated at each step as follows:

[x̂_{k+1}; ŷ_{k+1}] = [x̂_k; ŷ_k] − α [∂E/∂x̂; ∂E/∂ŷ]   (8)

The final update law for the estimated 2D location is determined in (9):

[x̂_{k+1}; ŷ_{k+1}] = [x̂_k; ŷ_k] + α Σ_{i=1}^{n} ((d_i − d̂_i) / d̂_i) [x̂_k − x_i; ŷ_k − y_i]   (9)

Equation (9), determined by the gradient descent method, calculates the spatial coordinates that minimize the distance error between the anchors and the tag: the next point to move to is computed from the current coordinates, and α is a parameter that controls the update speed. When UWB modules are used in a duct made partly of metal, location errors occur because the radio waves do not penetrate the metal parts well but are reflected by them [9]. To overcome this, the following methods were tried: general signal filters were applied to the incoming data values, the number of anchors was increased, and the shortcomings of the gradient descent method were compensated. The disadvantages of gradient descent are that it does not escape regions of zero slope well and that convergence is slow. A commonly used remedy is stochastic gradient descent with the momentum method [10]: the velocity term lets the update reach the minimum faster, and inflection points can be passed over smoothly thanks to the momentum:

v_{k+1} = η v_k − α ∂E/∂x_k   (10)
x_{k+1} = x_k + v_{k+1}   (11)

In (10), v_k represents the velocity at the k-th step and η represents a decaying rate. In this case, η = 0.9 and α = 0.95 were used, which made convergence about 10 times faster than before. The variable x_k is the coordinate to which gradient descent is applied, i.e. the x or y coordinate updated through the gradient descent with momentum method. Figure 8 shows the results of experiments carried out in metal ducts. The red circles in Figure 8 are the results of the conventional gradient descent method and the blue circles are the results of gradient descent with the momentum term. The momentum method reduces the error and comes closer to the actual positions than plain gradient descent. The actual errors are up to about 30 cm for both methods, which is reliable enough for location estimation inside ducts.
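The update law (9) extended with the momentum terms (10)-(11) can be sketched compactly as below. This is an illustrative sketch under assumed gains (the paper's PC-side implementation and exact α, η schedule are not given; the function names are ours):

```python
import math

def localize(anchors, dists, x0, y0, alpha=0.01, eta=0.9, iters=3000):
    """Estimate the tag position by gradient descent with momentum on
    E = sum_i (d_i - d_hat_i)^2, cf. equations (7) and (9)-(11)."""
    x, y = x0, y0
    vx = vy = 0.0
    for _ in range(iters):
        gx = gy = 0.0
        for (ax, ay), d in zip(anchors, dists):
            d_hat = math.hypot(x - ax, y - ay) or 1e-9  # avoid division by zero
            # dE/dx = -2 * (d_i - d_hat_i) * (x - ax) / d_hat_i, summed over anchors
            gx -= 2.0 * (d - d_hat) * (x - ax) / d_hat
            gy -= 2.0 * (d - d_hat) * (y - ay) / d_hat
        vx = eta * vx - alpha * gx        # velocity update, Eq. (10)
        vy = eta * vy - alpha * gy
        x, y = x + vx, y + vy             # position update, Eq. (11)
    return x, y

# four anchors at the corners of a 4 m section, tag truly at (1.0, 2.0)
anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
dists = [math.hypot(1.0 - ax, 2.0 - ay) for ax, ay in anchors]
x, y = localize(anchors, dists, 2.0, 2.0)
```

Starting the iteration from the centroid of the anchors keeps the estimate away from the singularities at the anchor positions themselves.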

Figure 7. Distances between four anchors and one tag

Figure 8. Experimental results of the gradient descent method with and without the momentum term

4. OPERATION SOFTWARE DESIGN
4.1. Autonomous driving
The robot moves forward by means of its three motors. When the PC gives commands to the robot through TCP/IP communication, the robot moves or stops. When the robot starts to move, the LED and the brush motor start working together; at the same time, the internal shape of the duct is recognized from the distance readings of the infrared sensors attached to the front and back of the robot, as shown in Figure 9. The infrared sensor values stay roughly constant while the robot moves along a straight duct; when a curved section is encountered, the values change. Through these values, the robot can be turned automatically at a left or right bend, called an L-shaped duct, or at a two-way branch, called a T-shaped duct, as shown in Figure 10. The precise schemes for passing through L- or T-shaped ducts are explained in the following sections.




Figure 9. IR Sensors placement in robot


Figure 10. Robot autonomous driving algorithm using IR sensors only

4.2. Left and right driving in the L-shaped duct
When the robot meets a curved section, it operates in the order shown in Figure 11. First, the left and right sides are checked through the infrareded sensors. When the robot encounters a curve while moving forward, one of the two side sensors reads more than a certain distance (here, 12.5 cm), as shown in Figure 10, so the robot detects empty space. The robot then stops the wheel that becomes the pivot axis and drives the remaining wheels so that it can turn without heavy resistance inside the duct. After the turn, once the value of that infrared sensor falls below the preset distance (here, 12.5 cm), the robot starts moving forward again. In this way, left and right driving in the L-shaped duct is carried out automatically.
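The side-sensor check above can be sketched as a simple classifier. This is an illustrative sketch, not the robot's firmware; the 12.5 cm threshold is the value quoted in the text, and the return labels are ours:

```python
EMPTY_SPACE_CM = 12.5  # side reading beyond this indicates an opening

def classify_side_openings(left_cm, right_cm, threshold=EMPTY_SPACE_CM):
    """Map the two side IR readings to a driving decision while moving forward."""
    left_open = left_cm > threshold
    right_open = right_cm > threshold
    if left_open and not right_open:
        return "turn-left"    # stop the pivot-side wheel, drive the others
    if right_open and not left_open:
        return "turn-right"
    if not left_open and not right_open:
        return "straight"
    return "branch"           # both sides open: camera confirmation needed (Sec. 4.4)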

Figure 11. Sequence of L-shaped duct running

4.3. Overcoming the T-shaped duct with timers
In a T-shaped duct, the robot can move forward, left, or right. To turn in a T-shaped duct, the robot must turn in a different way than in the L-shaped duct described earlier. When turning at a T-branch, the wheel body unfolds in the open space because of the robot's spring structure; in other words, one side is open, the wheel cannot reach the wall, and the robot's wheels can get stuck. Therefore the robot must be controlled to escape the section by making a reverse turn. If only infrared sensors are used for self-driving, as shown in Figure 12, the robot meeting a T-shaped duct detects a side empty space, just as in the L-shaped duct, but not a front empty space. Infrared sensors alone therefore cause the robot to move as if it were in an L-shaped duct, which leaves it stuck at the bottom of the T-branch. To solve this problem, accurate recognition of the T-shaped duct must be established. If the T-shape is detected through image processing, without relying on the distance sensors, the robot recognizes that a T-branch lies ahead; the detailed algorithm for finding the T-shape by image processing is explained in the following section. When an infrared sensor detects empty space, timer T1 inside the MCU starts, and the geared motor winds the wires to close the front of the three legs while the robot moves forward until the timer reaches a predetermined time. Then timer T2 (Figure 12) starts, rotating the wheel on the turning axis in reverse. When T2 reaches its preset time, timer T3 starts and rotates the robot's top wheel in reverse to turn the robot. When T3 reaches its predetermined time, all the wheels move forward and the robot resumes self-driving on the straight section. Through the experiments, the durations of the timers were found to be about 4.8 seconds, 6.4 seconds, and 8 seconds, respectively, and it was shown that the robot could automatically overcome the T-shaped duct. Based on the experimental results, the timer settings can be adjusted according to the conditions of the duct.
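Treating the three timers as consecutive phases, the sequence can be sketched as a simple time-indexed state machine. The durations are the experimentally found values quoted above; the phase names are illustrative, and whether the paper's timers are truly consecutive durations is our assumption:

```python
T1, T2, T3 = 4.8, 6.4, 8.0  # seconds, durations found experimentally (assumption)

def t_branch_phase(t):
    """Actuator phase t seconds after the side opening of a T-branch is detected."""
    if t < T1:
        return "fold-legs-and-forward"   # T1: wind wires, keep moving forward
    if t < T1 + T2:
        return "reverse-pivot-wheel"     # T2: rotate the turning-axis wheel in reverse
    if t < T1 + T2 + T3:
        return "reverse-top-wheel"       # T3: rotate the top wheel in reverse to turn
    return "forward"                     # resume straight-line self-driving
```

A real controller would drive this from the MCU's millisecond tick rather than wall-clock seconds, but the phase boundaries are the same.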

Figure 12. Sequence of T-shaped duct running using timers

4.4. Self-driving in the T-shaped duct using image processing
As mentioned earlier, the duct types the robot needs to overcome are L-shaped and T-shaped ducts, which require different driving control methods. The robot's ultimate goal is therefore to recognize each flexion accurately and overcome it properly. Using distance sensors alone may result in significant errors from misperception at the flexion, leading to the wrong choice of overcoming strategy, which may cause the robot to fall into certain sections or become stuck there. If image information from the camera is supplemented with identification algorithms, the robot can accurately recognize whether the flexion is L- or T-shaped and choose the right method to overcome it. With image information, the two duct shapes can be distinguished as shown in Figure 13. The distance sensor values can be similar for both shapes, but in the L-duct all three wheels touch the wall, whereas in the T-duct one wheel must run without touching the wall. Because this unsupported wheel could leave the leg unable to operate, an additional wire motor was used, as shown in Figure 14.


Figure 13. Image difference between (a) L-shaped and (b) T-shaped duct


Figure 14. Robot wire attachment drawing and picture, (a) wire and wire-driving motor, (b) shape of closed front part of robot

There are six wires, two for each leg. These wires are used to fold the front and rear parts of a leg when moving in a T- or L-shaped duct; before the robot enters the duct, the geared motors are activated to fold the legs by winding the wires. To distinguish the L-shaped duct from the T-shaped duct by image, the outline of the T-shaped duct edge was selected as the characteristic feature, and it was determined that the two types could be recognized according to whether the outline shown in Figure 15 was detected. At first, we tried to detect the outline directly, but the following problems arose:
a. Too many contours were detected.
b. It was difficult to use the desired pixel coordinates.
c. Real-time performance was poor because the computation took a long time.
Therefore, we decided to use the camera image itself rather than the pixel coordinates of the outline, and applied template matching [11]. Template matching has the advantage of obtaining the coordinates of the matching region from a reference image, which can be verified in real time thanks to its shorter computation time,



and the degree of matching can be adjusted via the threshold value. However, it has the disadvantages of requiring proper reference images in advance, needing the threshold changed for each environment, and being very susceptible to brightness changes and geometric rotations. Therefore, the Haar-like features method [12], which does not require a reference image beforehand, was used to detect the outline. Using Haar-like features, as shown in Figure 16, an object is recognized by combining elementary features constructed from pixel brightness differences in the image. The outline to be detected was found suitable as the key point for T-shape recognition because it retains the geometric information shown in Figure 15 and the area has a distinct brightness difference. To extract the features that classify an input image, the brightness differences of all pixels must be computed and compared with the thresholds of the elementary features. Computing the brightness differences over all pixels directly is very expensive, so integral images [13] can be used to compute the pixel brightness sum of any desired area quickly: the sum over any rectangular region is obtained with only four lookups, which enables rapid computation. Once the features of the T-shaped duct are extracted from the brightness differences, a classifier is built and compared against each input image to detect the T-shaped duct. For this operation, the functions provided by OpenCV were used with the elementary features and threshold values. To create the classifier, an xml file [14] was made using the Cascade Classifier provided in OpenCV. Figure 17 shows the process of creating the xml file as a flow chart together with the image data used.
For the positive samples, images were constructed from the shapes requiring recognition within the duct, and the points to be used as features were marked. For the negative samples, images unrelated to the duct were selected, e.g. backgrounds, animals, or cars. In the early stages of training, 50 positive and 200 negative samples were used, and non-T-shaped ducts such as L-shaped or straight ducts were sometimes judged to be T-shaped. To solve this problem, the numbers of positive and negative samples were increased to 300 and 900, respectively, and the image size was modified so that pixel brightness could be clearly distinguished. Real-time detection then confirmed that the T-branch section was detected relatively accurately, overcoming the earlier T-shape recognition errors, and that it was recognized not only in the original environment but also in new environments. Table 1 shows the results of the experiments according to the number of samples.
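The four-lookup rectangle sum that makes the Haar-like features fast can be sketched with an integral image. This is a NumPy-based illustration of the technique, not OpenCV's internal code:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a leading zero row/column so lookups need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] using exactly four table lookups."""
    return int(ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left])
```

A two-rectangle Haar-like feature is then just the difference of two `rect_sum` calls, regardless of how large the rectangles are.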

Figure 15. Detected T-branch section

Figure 16. Example of elementary features

Table 1. Success rate based on number of samples

Positive samples     50     300
Negative samples     200    900
Success rate         0%     95%

Figure 17. Sequence of making xml file and image data


5. EXPERIMENTS AND RESULTS
After the robot was manufactured, many tests were conducted for each section to ensure normal driving on straight, L-shaped, and T-shaped ducts. For straight and L-shaped ducts, driving was smooth, with image capturing and brushing supported by the distance sensors and algorithms. In the T-shaped ducts, the robot could move normally without falling into the branch when driven with the front parts of the wheels folded by the wires; the movement failed mainly when the wires were not sufficiently tight or the image processing was not secured. Table 2 shows the success rate for each duct segment. Using UWB, many experiments were conducted to identify the location of the moving robot inside ducts made of metal or plastic depending on the section. For the plastic section, the trilateration algorithm based on gradient descent gave high accuracy with an error of less than 5 cm. For the metal section, however, the TOF between anchors and tags varied greatly, preventing accurate positioning. With the introduction of the momentum term in (10)-(11) and an increase in the number of anchors from three to four, the error could be reduced to 30 cm. Table 3 lists the UWB position measurement errors according to the duct material. Table 4 shows the performance of real-time image transmission from the robot's camera to the PC. Due to the performance limitations of the Raspberry Pi Zero, the image frame rate had to be lowered. The video could be viewed instantly by streaming it live over the internet, but the network environment and the Raspberry Pi's performance caused a slight delay. In addition, the Pi Camera used on the robot was a small model with low resolution.

When the C# program described previously was used to identify the T-shaped duct by image processing, the identification success rate was around 95%. Misrecognition of the T-shaped duct occurred mainly when transparent ducts were used, since external light transmitted into the duct caused illumination differences that defeated the image processing, and when image transmission was delayed by the performance of the wireless communication modules. Figure 18 shows the robot operating in a vertical duct, which is more difficult than a horizontal one; tests showed that the robot climbed using the friction between the belts connected to the driveshafts and the inner walls of the duct. Figure 19 shows the robot moving through the L-shaped duct, and the robot's movements in the T-shaped duct are shown in Figure 20. The driving performance for the three duct types is compared in Table 5, which gives the driving speed measured for each duct shape. The robot's driving speed was moderate enough for cleaning with the brush. As expected, the robot is slowest in the T-shaped duct because it takes longer to recognize the T-shape through image processing and to overcome it safely through the subsequent timer and wire operations.

Table 2. Success rates for moving in duct in accordance with duct shapes

Duct type       Straight   L-shaped   T-shaped
Success rate    100%       100%       80%

Table 4. Image processing and performance figures

Video frame rate                 20 FPS
Video transmission delay         0 – 1.5 sec
Resolution (pixels)              320 x 240
Image processing success rate    95%

Figure 18. Moving scenes of a robot in a vertical duct

Table 3. Location errors of robot inside duct in accordance with different materials

Material                 Empty space   Plastic duct   Metallic duct
UWB positioning error    5 – 10 cm     5 – 10 cm      30 – 50 cm

Table 5. Robot average speed in several circumstances

Duct type       Straight   L-shaped   T-shaped
Speed (mm/s)    20         12         5

Figure 19. Moving scenes of a robot in a L-shaped duct



Figure 20. Moving scenes of a robot in a T-shaped duct

6. CONCLUSION
In this work, we developed a new type of duct cleaning robot that can solve the passage problem of various duct shapes. First, control of the three-legged robot, which can adapt to changes in duct diameter, was carried out. Next, real-time location estimation of the robot was established by applying UWB communication and a trilateration scheme, so that the user can easily identify where the robot is located inside the duct. In addition, the self-driving problem of existing robots was solved by using sensors and image processing, making the robot easier for ordinary people to operate. Finally, to conveniently monitor the overall cleaning situation and the robot's location, an interface was built on the user's PC so that control of the robot and monitoring of the current situation could be achieved at once. After the robot detected an L-shaped duct ahead using the infrared sensors, it could move freely through it by stopping one of the three legs (tracks) while the other two kept moving. The robot was also built with wires that pull the front and rear parts of the tracks inwards, allowing it to pass the open area of a T-shaped section without falling in. In addition, the T-shaped duct was reliably recognized by adopting the Haar-like features method and a cascade classifier using the camera images. Using the three-leg (track) mechanism along with the robot's slider-link structure and high-friction belts, the robot was able to drive through various sections, including L-shaped, T-shaped, and vertical ducts, and succeeded in autonomous driving using infrared sensors and image processing. The robot worked well inside circular ducts, but it cannot operate in rectangular ducts because it is difficult to make the legs conform to the inner surface. Although the robot's primary purpose is cleaning the inside of ducts, it can also be used for exploration in pipes with environments similar to round ducts.

ACKNOWLEDGEMENTS
This study was supported by the Research Program funded by SeoulTech (Seoul National University of Science and Technology).

REFERENCES
[1] Woo Tea Jung, "Underground History Duct Pollution and Cleaning Technology," The Society of Air-Conditioning and Refrigerating Engineers of Korea, vol. 41, no. 4, pp. 34-41, 2012.
[2] Seung Woo Jeon et al., "Design of an Intelligent Duct Cleaning Robot with Force Compliant Brush," 2012 12th International Conference on Control, Automation and Systems, Jeju Island, pp. 2033-2037, 2012.
[3] Young Sik Kwon et al., "Design and Motion Planning of a Two-Module Collaborative Indoor Pipeline Inspection Robot," IEEE Transactions on Robotics, vol. 28, no. 3, pp. 681-696, June 2012.
[4] Ya Wang et al., "Autonomous Air Duct Cleaning Robot System," 2006 49th IEEE International Midwest Symposium on Circuits and Systems, San Juan, pp. 510-513, 2006.
[5] Atsushi Kakogawa et al., "An in-pipe robot with underactuated parallelogram crawler modules," 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, pp. 1687-1692, 2014.
[6] Atsushi Kakogawa et al., "Design of an underactuated parallelogram crawler module for an in-pipe robot," 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, pp. 1324-1329, 2013.
[7] K. H. Kim, "UWB RADAR trend of technology," The Proceedings of the Korea Electromagnetic Engineering Society, vol. 13, no. 3, pp. 52-62, 2002.
[8] Jung-ha Kim et al., "A study on indoor navigation system using localization based on wireless communication," Journal of the Korean Society of Marine Engineering, vol. 37, no. 1, pp. 114-120, January 2013.
[9] Joon Hyun Park et al., "A Study on Interference by Obstacles for UWB based Distance Calculation," Proceedings of Symposium of the Korean Institute of Communications and Information Sciences, pp. 984-985, 2018.


[10] Ning Qian, "On the momentum term in gradient descent learning algorithms," Neural Networks, vol. 12, no. 1, pp. 145-151, January 1999.
[11] Wisarut Chantara et al., "Object Tracking using Adaptive Template Matching," IEIE Transactions on Smart Processing & Computing, vol. 4, no. 1, pp. 1-9, February 2015.
[12] Tae Joon Park et al., "Detection of License Plates Using a Cascade of Boosted Classifiers Based on Haar-like Features," Control, Automation and Systems Domestic Conference Symposium, pp. 631-633, 2008.
[13] Ki Yeong Park et al., "An Improved Normalization Method for Haar-like Features for Real-time Object Detection," The Journal of the Korean Institute of Communication Sciences, vol. 36, no. 8, pp. 505-515, August 2011.
[14] Mahdi Rezaei, "Creating a Cascade of Haar-Like Classifiers: Step by Step," ResearchGate Publishing, pp. 1-8, 2013.

BIOGRAPHIES OF AUTHORS

Myeong In Seo graduated from Seoul National Univ. of Science and Technology in 2019 and is pursuing a graduate course at the same university. His major research areas are image processing and robotics control.

Woo Jin Jang graduated from Seoul National Univ. of Science and Technology in 2019 and is pursuing a graduate course at the same university. His major research areas are robot mechanism design and analysis, and robotics control.

Junhwan Ha graduated from Seoul National Univ. of Science and Technology in 2018 and has been working at Samsung SDI. His major research areas are robotics and programming.

Kyongtae Park graduated from Seoul National Univ. of Science and Technology in 2018 and has been working at Hyundai Motors. His major research areas are robotics and design.

Dong Hwan Kim graduated from Seoul National University in 1986 and finished his master's degree at the same university in 1988. He received a Ph.D. from Georgia Tech in 1995. He worked at Doosan Infracore as a junior researcher from 1988 to 1991 and, after his Ph.D., at the Korea Institute of Industrial Technology from 1997 to 1998. He joined Seoul National University of Science and Technology as a professor in 1998. His major research areas are mechatronics, robotics, and automation.



International Journal of Robotics and Automation (IJRA) Vol.9, No.2, June 2020, pp. 135~142 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp135-142


Inverse kinematic analysis of 3 DOF 3-PRS PM for machining on inclined prismatic surfaces

Hishantkumar Rashmikantbhai Patel, Yashavant Patel
Mechanical Engineering Department, A D Patel Institute of Technology, India

Article Info

Article history:
Received Sep 18, 2019
Revised Oct 06, 2019
Accepted Feb 18, 2020

Keywords:
3-PRS
Inverse kinematic
Parallel manipulator
Prismatic surfaces

ABSTRACT

Parallel manipulators (PMs) are family members of modern manipulators based on a closed-loop structural architecture. A 3-PRS (prismatic, revolute, spherical) manipulator with 3 DOF is investigated for its machining capability on prismatic surfaces, as it possesses greater structural stiffness, higher payload carrying capacity, and more precision compared to serial manipulators, as well as less accumulation of errors at joints within a constrained workspace. The manipulator can be utilized in various fields of application such as precision manufacturing, medical surgery, space technology, and many more. This paper focuses primarily on the usage of the parallel manipulator in industrial applications such as drilling and grooving on inclined workpiece surfaces. Inverse kinematic solutions are used for drilling and for square and round profiles on an inclined surface using the parallel manipulator.

This is an open access article under the CC BY-SA license.

Corresponding Author:
Hishantkumar Rashmikantbhai Patel,
Mechanical Engineering Department, A D Patel Institute of Technology,
New Vallabh Vidyanagar, Gujarat, India.
Email: hishupatel111@gmail.com

1. INTRODUCTION

In the recent era, a lot of research work has been going on parallel manipulators (PMs) due to their inherent suitability for precise manufacturing within a small operating workspace. Three elements are common to any parallel mechanism: 1) a fixed platform, 2) connecting links, and 3) a moving platform. Various configurations of parallel mechanisms find applications in, for example, the food packaging industry and automatic spray painting; cable-driven robots are applicable to cutting, excavating, and grinding; and 3-PUU and 3-PRS parallel manipulators are used in medical and machining applications, respectively, as reported by Y. D. Patel et al. [1]. Nowadays, high-speed machining of extra-large components with complex geometries is one of the challenging problems in the machine tool industry. For instance, machining an aircraft component with a large cross section would require a huge gantry 5-axis machine tool weighing tons and having a large footprint. One promising alternative is the use of a parallel kinematic mechanism in place of the large machine tool. This proposal has been fully exemplified by the commercial success of the Tricept robot [2] and the Sprint Z3 head [3] in the automotive and aerospace industries. Huang et al. proposed a novel parallel kinematic machine (PKM) named the A3 head, which has one translational and two rotational capabilities; augmented by x-y motion, the proposed module can be used as a multi-axis spindle head to form a 5-axis high-speed machining unit. Machining using a conventional machine requires additional support structures such as jigs and fixtures; a parallel kinematic structure can be used in place of the conventional machine, reducing both production time and production cost. Forward kinematics is required for the synthesis of new robotic configurations, whereas inverse kinematics is mandatory for real applications that meet machining requirements. The direct kinematic analysis




as well as the workspace are reported for the 3-DOF 3-PRS (prismatic, revolute and spherical) parallel manipulator by Y. D. Patel et al. [4], who also carried out an inverse kinematic analysis of the 3-DOF 3-PRS parallel manipulator based on constraint equations [5]. Such a 3-DOF 3-PRS parallel mechanism is used for generating complex geometry on inclined surfaces. Pond et al. presented variations in the vertical guideway of the 3-PRS manipulator and stated that the model of Tsai et al. has the maximum reachable workspace compared with the models of Carretero et al. and Merlet [6]. Arockia Selvakumar et al. proposed three methods for the position analysis of a 3-DOF tripod and tri-glider: 1) simulation in ADAMS software, 2) a fabricated model, and 3) an analytical method [7]. Saioa Herrero et al. used a 2PRU-1PRS parallel manipulator in place of the 3-PRS parallel manipulator for kinematic investigation [8]. In a parallel manipulator, the torque at the joints and the angular tilt of the mobile platform are key parameters for accurate position as well as orientation; a comparative study of 3-UPS, 3-RPR and 3-RPS 3-DOF parallel manipulators based on joint torque, angular tilt of the moving platform and positional accuracy was carried out by A. Arockia Selvakumar et al. [9]. A 3-PRS parallel manipulator with adjustable layout angle of the actuators was used for kinematic investigation by Yangmin Li et al., who reported the actuator layout angle at which the maximum reachable workspace occurs [10]. Pardeshi et al. presented that parallel mechanisms with fewer degrees of freedom are more suitable for drilling, milling, welding and tapping applications; 3-DOF parallel manipulators have greater advantages over higher-DOF parallel manipulators in terms of manufacturing and operating cost, and they can be classified into three categories: planar parallel manipulators (3 translational motions), spherical parallel manipulators (3 rotational motions) and spatial parallel manipulators (mixed motions) [11]. Tsai et al. [12] built the geometric relations between the sliders, links and moving platform to formulate three nonlinear trigonometric equations, which are solved using Bezout's method. Cherfia et al. [13] carried out inverse and direct kinematic analyses of a constrained parallel robot with 3 DOF and four passive segments, and validated the direct and inverse kinematic results against an experimental prototype. Zhang et al. [14] carried out an inverse position analysis of the 3-DOF A3 head and stated that the structural parameters of the 3-RPS mechanism have only a slight effect on the joint motions. Zhang et al. [15] depicted the effect of the radii of the platform and the base, along with the cross-section of the limbs, on the lower-order frequencies; this data is useful at the design stage. In this paper, a spatial parallel manipulator with linear actuation by lead screws is considered for drilling holes on inclined surfaces within a confined workspace. The moving platform of the 3-DOF parallel manipulator is connected to the base platform by links with prismatic-revolute-spherical joints; the revolute and spherical joints are passive, whereas the prismatic joints are active. This paper deals with the inverse kinematic analysis of the said parallel manipulator on inclined prismatic surfaces. The work is organized in five sections. The architecture of the proposed manipulator is discussed in the second section. The third section deals with the mobility equation as well as the degree-of-freedom calculation for the 3-PRS parallel mechanism. The inverse kinematic analysis of the 3-PRS parallel manipulator is carried out in the fourth section. In the fifth section, position analysis for drilling and profile generation is carried out for the proposed manipulator.

2. ARCHITECTURAL DESCRIPTION

The schematic representation of the proposed 3-DOF parallel manipulator is shown in Figure 1. The moving platform is connected to the base platform by three limbs, each consisting of prismatic, revolute and spherical joints; the three identical limbs make the manipulator symmetric. Each limb has one active prismatic joint and two passive joints (revolute and spherical). All active joints are located on or near the base platform. The three active prismatic joints use a recirculating ball screw mechanism, which makes it possible to stop the prismatic motion at any instant without any slippage. Simultaneous, independently varied motions of the prismatic joints produce the angular tilt of the moving platform that is desired for the drilling application. The connecting links 1, 2 and 3 have defined lengths, and the prismatic joint axes make angles of 120° with each other when vectors are drawn from the origin O in the XY plane. The fixed and moving platforms are equilateral triangles in shape, as shown in Figure 2.
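The symmetric layout described above — joint attachment points at the vertices of equilateral triangles, spaced 120° apart about the origin in the XY plane — can be sketched as follows. The radii used are illustrative assumptions, not values from the paper.

```python
import math

def attachment_points(radius, n=3):
    """Vertices of an equilateral triangle of circumradius `radius`,
    spaced 120 degrees apart about the origin in the XY plane."""
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n))
            for i in range(n)]

# Assumed, illustrative circumradii (not taken from the paper):
base = attachment_points(0.30)      # fixed-platform joint locations, m
platform = attachment_points(0.15)  # moving-platform joint locations, m
```

Because both triangles are generated with the same angular spacing, each limb connects a base vertex to the platform vertex on the same ray, which is what makes the manipulator symmetric.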

3. MOBILITY AND DEGREE OF FREEDOM

Mobility analysis is carried out for many parallel manipulators using the Grübler–Kutzbach criterion. Applying the Grübler–Kutzbach equation to the 3-PRS parallel mechanism,





F = λ(n − j − 1) + Σᵢ fᵢ = 6(8 − 9 − 1) + 15 = 3   (1)

Here, λ is the degree of freedom of the space utilized by the mechanism while in operation (λ = 6 for spatial motion), n = 8 is the number of links, j = 9 is the number of joints, and fᵢ is the degree of freedom of joint i (1 for each prismatic and revolute joint, 3 for each spherical joint). Hence, the 3-PRS manipulator has three degrees of freedom. The Grübler–Kutzbach criterion was specifically used for determining the number of degrees of freedom; it cannot identify what type of degree of freedom is present in the manipulator. It was observed that the 3-DOF manipulator has one translational motion along the Z axis and two rotational motions, about the X and Y axes.
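The degree-of-freedom count can be sketched numerically. This is a minimal illustration of the Grübler–Kutzbach formula as applied in (1); the function name and argument layout are ours, not the paper's.

```python
def grubler_kutzbach(lam, links, joint_dofs):
    """Grübler–Kutzbach criterion: F = lam * (n - j - 1) + sum(f_i),
    where n is the link count, j the joint count, and f_i the DOF of joint i."""
    j = len(joint_dofs)
    return lam * (links - j - 1) + sum(joint_dofs)

# 3-PRS: 8 links, 9 joints (3 prismatic + 3 revolute, 1 DOF each; 3 spherical, 3 DOF each)
dof = grubler_kutzbach(lam=6, links=8, joint_dofs=[1] * 3 + [1] * 3 + [3] * 3)
print(dof)  # 3
```

Note that the formula counts only how many degrees of freedom the mechanism has; identifying them as one translation (Z) and two rotations (X, Y) requires the separate constraint analysis described in the text.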

Figure 1. Schematic diagram of 3 PRS parallel mechanism

Figure 2. Planar view of the 3-PRS mechanism

4. INVERSE KINEMATIC ANALYSIS

The inverse kinematic problem can be defined as the placement of the tool frame at a desired position and orientation by means of actuation; its purpose is to determine the actuated variables from a given pose of the end effector. Inverse kinematic analysis is what is used in practical applications of the manipulator. Figure 3 shows the position vectors of the spherical joints with respect to the reference frame; indicated there are the linear actuations of the prismatic joints of limbs 1, 2 and 3, the distance between two lead screws, the centre-to-centre distance between two spherical joints, and the centre-to-centre distance b between the prismatic joint and the revolute joint. The spherical joint coordinates are expressed in the form of (2).

A normal vector passes through the moving platform centre point, and the centre points of the spherical joints lie in the corresponding plane, as shown in Figure 4. The equation of the sphere passing through the three spherical joints is expressed as (3), in terms of the spherical joint coordinates along the x, y and z axes. The spherical joint angles are then determined by (4).
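The inverse kinematic step can be sketched under simplifying assumptions: vertical prismatic rails spaced 120° on a circle of an assumed base radius, a fixed connecting-link length, and the revolute-joint plane constraint (and the resulting parasitic platform motions) ignored. All names and numbers below are illustrative, not the paper's.

```python
import math

def ik_3prs(platform_joints, base_radius, link_len):
    """Sketch of the inverse kinematic step for a 3-PRS-like layout:
    for each spherical-joint position s_i = (x, y, z), find the prismatic
    actuation d_i of a vertical slider on rail i (rails spaced 120 degrees
    on a circle of radius `base_radius`) so that a rigid link of length
    `link_len` closes the loop: |s_i - b_i(d_i)| = link_len."""
    actuations = []
    for i, (sx, sy, sz) in enumerate(platform_joints):
        phi = 2 * math.pi * i / 3
        bx, by = base_radius * math.cos(phi), base_radius * math.sin(phi)
        horiz2 = (sx - bx) ** 2 + (sy - by) ** 2
        if horiz2 > link_len ** 2:
            raise ValueError(f"limb {i}: pose outside reachable workspace")
        # of the two loop-closure solutions, take the slider above the joint
        actuations.append(sz + math.sqrt(link_len ** 2 - horiz2))
    return actuations
```

Given a desired platform pose, the spherical-joint positions would first be computed from the platform geometry (as in (2)) and then passed to a routine like this one to recover the three slider heights.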






Figure 4. Normal vector passing through the plane

Figure 3. Position vector of spherical joint

Case study 1: The structural parameters considered for the 3-PRS manipulator, together with the required linear actuation of each of the three prismatic joints obtained from the inverse kinematic solution, are given in Table 1 and Figure 5.

Table 1. Joint parameters for case study 1 (required linear actuation of each prismatic joint)

Figure 5. Tool position and orientation after actuation






5. POSITION ANALYSIS OF MANIPULATOR FOR DRILLING AND SLOTTING OPERATION ON PRISMATIC SURFACE

Placement of the tool frame at a pre-defined point on a surface is called position analysis; this pre-defined point is taken as the tool tip. The tool tip coordinates are used to determine the linear actuations of the recirculating ball screws. Figure 6 shows the prismatic part with an inclined surface, whose inclination angle can be varied as per requirements. The selected dimensions of the prismatic part lie within the workspace of the said manipulator. A new coordinate system is considered at one bottom corner of the prismatic part, which is placed at an offset to the reference frame of the 3-PRS parallel manipulator; the points required for the position analysis must be measured from the manipulator reference coordinate system. It is essential to find the normal vector to the inclined surface for proper orientation of the tool frame. Figure 7 shows the normal vector to the plane as well as the offset distance between the two coordinate systems. The unit normal is determined from the cross product of two in-plane position vectors,

n = (P2 − P1) × (P3 − P1) / |(P2 − P1) × (P3 − P1)|   (5)

Here, P2 − P1 is the position vector between points 2 and 1, P3 − P1 is the position vector between points 3 and 1, TL is the tool length (vector length), and the offset distance between the two coordinate frames is known. The following steps are followed in the position analysis:
1. Find the point coordinates with respect to the mechanism reference frame.
2. Determine the normal vector to the plane and its end point.
3. Compute the moving platform centre point (mcp).

Position analysis of the manipulator for the drilling and slotting operations on the inclined surface is shown in case studies 2(a) and 2(b), respectively.
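The steps above can be sketched directly from (5): compute the unit normal of the inclined plane from three measured points, then offset the tool tip along that normal by the tool length TL to obtain the moving-platform centre point. The point values in the usage note are illustrative only.

```python
import math

def unit_normal(p1, p2, p3):
    """Unit normal of the plane through three points:
    n = (P2 - P1) x (P3 - P1), normalized -- the cross-product form of (5)."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    mag = math.sqrt(sum(c * c for c in n))
    return [c / mag for c in n]

def platform_centre(tool_tip, normal, tool_len):
    """Step 3: offset the tool tip along the surface normal by the tool
    length TL to get the moving-platform centre point (mcp)."""
    return [tool_tip[i] + tool_len * normal[i] for i in range(3)]
```

For a horizontal plane through (0,0,0), (1,0,0) and (0,1,0), `unit_normal` returns the +Z direction, and `platform_centre` then places the platform centre a tool length above the tip, as expected.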

Figure 7. Normal vector and its end point on prismatic part

Figure 6. Number of points on prismatic parts




Case study 2(a): Consider one point on the inclined prismatic surface. Given the offset distance, the tool tip coordinates, the normal vector to the plane and the resulting moving platform centre point coordinates, the three spherical joint centre point coordinates are determined analytically by (1) and (2); the required linear actuation of each prismatic joint is listed in Table 2.

Table 2. Joint parameters for case study 2(a) (required linear actuation of each prismatic joint)

Figure 8 shows the position and orientation of the tool tip at the given point after linear actuation of the recirculating ball screws.

Figure 8. Graphical representation of the position analysis

Case study 2(b): Points are selected from the square-shaped area located on the inclined surface as shown in Figure 9, and Figure 10 shows the position and orientation of the tool tip at the given points after linear actuation of the recirculating ball screws. The inclination angle of the inclined surface as well as the offset distance are the same as in the previous case. Figure 11 presents the flowchart of the position analysis for performing machining operations on an inclined prismatic surface with the 3-PRS manipulator. The input parameters are the tool tip coordinates, the normal vector to the plane and the moving platform centre point coordinates.






Analytically, the three spherical joint centre point coordinates are determined using (1) and (2).

Note: position analysis is shown here for one point due to page restrictions, but the same procedure can be followed to determine the joint parameters for the other points lying on the same profile; the procedure is shown in the form of a flowchart. Table 3 shows the results for the active and passive joint parameters after the inverse kinematic solutions.

Figure 9. Square shape on inclined surface

Figure 10. Position analysis on square profile

Figure 11. Flow chart for position analysis on inclined surfaces

Table 3. Joint parameters for case study 2(b) (required linear actuation of each prismatic joint)





6. CONCLUSION

Drilling using conventional machines is always performed on surfaces normal to predefined machine axes. Holes on inclined surfaces using such machines are feasible with suitable jigs and fixtures, with limited accuracy, and require proper orientation of the job; this is a very time-consuming and costly approach. In such circumstances, a parallel manipulator can solve this problem within a limited workspace but with high precision and great consistency. Position analysis of the 3-PRS parallel manipulator is carried out using a vector approach. Based on the inverse kinematic analysis, the manipulator is simulated in MATLAB and the results are presented graphically. It is observed that the offset distance is the most influential parameter for machining on an inclined workpiece with the proposed manipulator. Inverse kinematic solutions are used to meet the machining requirements, with proper tool orientation, to produce square and round profiles on inclined surfaces using the 3-PRS parallel manipulator.

REFERENCES
[1] Y. D. Patel and P. M. George, "Parallel manipulators applications—a survey," Modern Mechanical Engineering, vol. 2, no. 3, pp. 57-64, 2012.
[2] F. Caccavale, B. Siciliano, and L. Villani, "The Tricept robot: dynamics and impedance control," IEEE/ASME Transactions on Mechatronics, vol. 8, no. 2, pp. 263-268, 2003.
[3] N. Hennes and D. Staimer, "Application of PKM in aerospace manufacturing—high performance machining centers ECOSPEED, ECOSPEED-F and ECOLINER," in Proceedings of the 4th Chemnitz Parallel Kinematics Seminar, pp. 557-577, 2004.
[4] Y. D. Patel and P. M. George, "Kinematic analysis and 3D workspace development of 3DOF parallel manipulator with rotary base," in Proceedings of the 1st International and 16th National Conference on Machines and Mechanisms (iNaCoMM 2013), IIT Roorkee, India, Dec. 18-20, 2013.
[5] Y. D. Patel and P. M. George, "Constraint and inverse kinematic analysis of 3-PRS parallel manipulator," in 5th International & 26th All India Manufacturing Technology, Design and Research Conference (AIMTDR 2014), Dec. 12-14, 2014.
[6] G. Pond and J. A. Carretero, "Architecture optimisation of three 3-PRS variants for parallel kinematic machining," Robotics and Computer-Integrated Manufacturing, vol. 25, no. 1, pp. 64-72, 2009.
[7] A. A. Selvakumar and M. A. Kumar, "Experimental investigation on position analysis of 3-DOF parallel manipulators," Procedia Engineering, vol. 97, pp. 1126-1134, 2014.
[8] S. Herrero, C. Pinto, O. Altuzarra, and M. Diez, "Analysis of the 2PRU-1PRS 3DOF parallel manipulator: kinematics, singularities and dynamics," Robotics and Computer-Integrated Manufacturing, vol. 51, pp. 63-72, 2018.
[9] A. Arockia Selvakumar, R. Sathish Pandian, R. Sivaramakrishnan, and K. Kalaichelvan, "Simulation and performance study of 3-DOF parallel manipulator units," in INTERACT, 2010.
[10] Y. Li and Q. Xu, "Kinematic analysis of a 3-PRS parallel manipulator," Robotics and Computer-Integrated Manufacturing, vol. 23, no. 4, pp. 395-408, 2007.
[11] S. M. Pardeshi and A. S. A. Doss, "Kinematic and velocity analysis of 3-DOF parallel kinematic machine for drilling operation," in Proceedings of the Advances in Robotics, no. 18, pp. 1-6, 2017.
[12] M. S. Tsai, T. N. Shiau, Y. J. Tsai, and T. H. Chang, "Direct kinematic analysis of a 3-PRS parallel mechanism," Mechanism and Machine Theory, vol. 38, no. 1, pp. 71-83, 2003.
[13] A. Cherfia, A. Zaatri, and M. Giordano, "Kinematics analysis of a parallel robot with a passive segment," Ingeniare. Revista Chilena de Ingeniería, vol. 15, no. 2, pp. 141-148, 2007.
[14] J. Zhang, Y. Li, and T. Huang, "Dynamic modeling and eigenvalue evaluation of a 3-DOF PKM module," Chinese Journal of Mechanical Engineering, vol. 23, no. 2, pp. 1-9, 2010.
[15] J. Zhang, Y. Q. Zhao, and M. Ceccarelli, "Elastodynamic model-based vibration characteristics prediction of a three prismatic-revolute-spherical parallel kinematic machine," Journal of Dynamic Systems, Measurement, and Control, vol. 138, no. 4, pp. 041009-1-041009-14, 2016.



International Journal of Robotics and Automation (IJRA) Vol.9, No.2, June 2020, pp. 143~152 ISSN: 2089-4856, DOI: 10.11591/ijra.v9i2.pp143-152


Designing and testing of a smart firefighting device system (LAHEEB) Yousef Samkari, Kamel Guedri, Mowffaq Oreijah, Shadi Munshi, Sufyan Azam Department of Mechanical Engineering, Umm Al-Qura University, Saudi Arabia

Article Info

Article history:
Received Aug 16, 2019
Revised Oct 10, 2019
Accepted Feb 18, 2020

Keywords:
Extinguishing
Firefighting
Robot
Robotics
Safety

ABSTRACT

The motivation behind this project is firefighter deaths. Many firefighters struggle to perform their duty, and many die on missions depending on the circumstances of each incident. Firefighters are our heroes and our sense of security in times of trouble; they put themselves in dangerous situations to protect us. At present, the world is moving toward the use of software and hardware technologies. This paper proposes a smart firefighting device system (LAHEEB) designed to detect the source of a fire, extinguish it, and increase knowledge about fire behavior in the incident area. The device can extinguish different types of fire (classes A, B, C, D and F/K, as well as electrical and metal fires) in the shortest time and without letting the fire spread. It will reduce the risk of injury for firefighters and possible victims and decrease the monetary losses, which increase considerably as fire duration increases. The LAHEEB device consists of two parts. The first part, called the mid-cap, forms the body of the device and holds most of the components, such as the sensors, relays, battery, servo motor, liquid tank, push-button and Arduino. The second part, called the bottom-cap, is the moving part of the device and carries the most significant components of LAHEEB, such as a servo motor, the sprayer and a thermal camera. The device makes use of a liquid tank and a spray mechanism for extinguishing the fire. The spraying nozzle is mounted on a servo motor to cover the maximum area, and liquid extinguisher is pumped from the main tank to the nozzle with the help of a pump. The whole system is programmed using an Arduino DUE board, which forms the brain of the system.

This is an open access article under the CC BY-SA license.

Corresponding Author: Yousef Samkari, Department of Mechanical Engineering, Umm Al-Qura University, Al Taif Road, Makkah 24382, Saudi Arabia. Email: josephsamkari@gmail.com

1. INTRODUCTION

Artificial intelligence is a subject that will be useful in many industries, while a robot is a machine that performs tasks usually done by humans. The first uses of modern robots were in factories as industrial robots: machines with manufacturing tasks that allowed production without the need for human assistance [1]. Robots can be divided into several groups, such as telerobots, telepresence robots, mobile robots and autonomous robots [1]. Telerobotics, or teleoperation, is the technical name given to any handled device performing operations controlled by an operator; unlike autonomous robots, telerobots are restricted in that the operator has a limited range of functions and commands. In teleoperation with telepresence, the human operator has a sense of being on location, so that the experience resembles virtual reality.




A telepresence robot is similar to a telerobot; the only difference is that it provides data feedback, such as video and sound. Therefore, telepresence robots are commonly used in many fields requiring monitoring procedures, such as child services in nurseries and education [2]. A mobile robot requires human beings to navigate it and carry out tasks [3], while autonomous robots can perform tasks independently, without the involvement of human beings [4]. Moreover, industrial robots are multi-function manipulators designed to handle specific materials, tools or devices through numerous programmed drives to perform several tasks [5]. Many studies and projects have shown that robots can be valuable in medicine [6], rescue operations [7, 8], industry [9] and rehabilitation [10-12]. The use of robots is increasing and has become more common today than ever before, and the fire extinguisher robot — a robot that can detect and extinguish a fire by itself [13] — has become essential to protect human life. Many projects related to firefighting robotics have been studied in the course of this project to compare with, improve and develop the study of the smart firefighting device (LAHEEB). The following robots are some examples used to fight fire in different applications; most of them have advantages and disadvantages that helped to improve this study.

A smart firefighting robot system (LAHHB) [14]: this paper provides a detailed description of the smart system that helped to build the firefighting robot. The robot has many advantages in detecting different types of fire and avoiding obstacles, and it helps to put out the source of the fire in the shortest time. From this concept the LAHEEB device was launched, in order to combine all of those functions with new features such as a thermal camera, making the device even more accurate and precise in detecting fire and victims in the area.

Virtual reality simulation of a firefighting robot [15]: this system was developed in MATLAB/Simulink and used for initial testing of control algorithms. The project clearly showed that the robot did not have a sufficient level of functionality because of the low-detail validation of the environment: the robot could operate only in free space, without obstacles. A fire protection robot works as a firefighting robot in houses and municipal buildings; in [16] there is a detailed description of a firefighting apprentice robot which moves at a controlled speed, analyzes the blaze, and then extinguishes it with the aid of a pumping mechanism. The system's primary advantages are fire detection by complex algorithms, the use of navigation, and the use of a sound sensor for activation, while its main disadvantages are a low-efficiency computer, a low-power frame, and the absence of mapping and return-to-base behaviour. The firefighting robot of [17] works for only 15 minutes and then returns to its supply station. This principle is one of the useful applications of firefighting that fits non-industrial buildings such as houses; the main disadvantages are the short working time and the low storage of water provided. Pokey the fire-fighting robot is a robot that was entered in competition and subsequently improved; in [18] there is a detailed description of its basic operating algorithms and user equipment. The robot is equipped with sensors such as a line sensor, but it does not work very well in dense smoke. The advantages of this robot are the use of a complex firefighting tool and two types of fire sensors; the disadvantages are the short working distance imposed by the sensor range (less than 1.5 m), the absence of optical means of environment perception, and the low efficiency of the computer.

In this paper, a smart firefighting device (LAHEEB) is proposed. Firefighters play a significant role in society, and many studies therefore discuss the use of devices to minimize firefighters' injuries and losses as well as to increase the efficiency, safety and quality of the task and its procedures [19]. The primary function of this device is to detect the source of different types of fire, extinguish it, and increase knowledge about fire behavior in the incident area. There are several existing types of robot for firefighting at home and for extinguishing forest fires [20]. By using such robots, fire identification and rescue activities can be accomplished with higher security and without placing firefighters in high-risk, dangerous conditions.

2. METHODOLOGY

In this section, the methodology is divided into three parts, all of which are assembled to accomplish the functions of detecting a fire, extinguishing it, and increasing knowledge about fire behavior. The first part is the mechanical design structure of the device body; the second is the hardware implementation of the parts used; and the third covers the software design details.

2.1. Mechanical design structure
SolidWorks software is used to produce a 3D schematic diagram of the bottom-cap of the smart firefighting device (LAHEEB), as shown in Figure 1, and of the mid-cap, as shown in Figure 2. The main structure consists of two parts: the bottom-cap and the mid-cap. The thickness of the body is 2 mm, and the shell is made from an aluminum alloy to protect the electronic circuit.





The surface is smooth and painted, and the alloy sheet is resistant to heat of up to 200 °C [21]. The aluminum body contains holes that make it easier to mount flame sensors all around the device so that fire can be detected immediately, as shown in Figure 3. Building the mid-cap begins with cutting the aluminum sheets, pressing them into a straight shape, and attaching them together; next, holes are made to fix the flame sensors, and putty is applied to prepare the device for painting. The two caps are attached together by a servo motor and shaft, as shown in Figure 3, a design of the complete LAHEEB device in SolidWorks with dimensions. The assembly procedure is prepared step by step and tested to make sure everything works and performs correctly. Finally, the thermal camera, liquid tank, battery, flame sensors, relay, servo motor and circuit board are placed, giving the fully assembled device shown in Figure 4.

Figure 1. 3D Structure of a bottom-cap of LAHEEB using Solidworks

Figure 2. 3D Structure of a mid-cap LAHEEB using SolidWorks

Figure 3. A design of a complete LAHEEB device using SolidWorks, with overall dimensions (bottom-cap, thermal camera, sprayer, flame sensor and mid-cap labelled; principal dimensions 260 mm, 126 mm, 100 mm and 76 mm)

Figure 4. A smart firefighting device system (LAHEEB)

2.2. Hardware implementation
Several electronic parts help in developing LAHEEB, such as sensors, a microcontroller, DC motors, a motor driver, servo motors and a pump. Figure 5 shows the block diagram of the LAHEEB process, which consists of the flame sensors and thermal camera as inputs, an Arduino DUE as the microcontroller, and a servo motor and pump as outputs to move liquid extinguisher out of the tank.
a. Flame sensors: The flame sensor, shown in Figure 6, is able to detect a flame by sensing light with wavelengths between 760 and 1100 nanometers. The detection distance ranges between 20 and 100 cm, and the detection angle is 60 degrees [22].
b. Servo motors: Servos are used to operate remote-controlled or radio-controlled toy cars, robots and airplanes. Servo motors are controlled by sending an electrical pulse of variable width, or pulse width modulation (PWM), through the control wire [23]. A servo motor travels 90 degrees in either direction, for a total movement of 180 degrees.
c. Fluid pump: The fluid pump is a key component of this device, as it pumps the fire terminator fluid to extinguish the fire immediately, whatever the class of fire that occurs. This fluid is used because it has a proven certificate for extinguishing all types of fire. The pump was selected for this project because of its very small size and light weight [24]; its working voltage is around 4 V to 12 V, and its working current is 1 A.
d. Thermal camera: The human eye can see only a small part of the electromagnetic spectrum, so humans cannot see ultraviolet light or infrared. Thermal imaging, shown in Figure 7, is used as a valuable diagnostic tool for electrical and mechanical applications [25].
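The PWM control described in (b) can be illustrated with a small mapping from servo angle to pulse width. The 1000-2000 µs endpoints are typical hobby-servo values assumed here for illustration; they are not specified in the paper, and real servos vary.

```python
def angle_to_pulse_us(angle_deg, min_us=1000, max_us=2000):
    """Map a servo angle in [0, 180] degrees to a PWM pulse width in
    microseconds, assuming the common 1000-2000 us hobby-servo range."""
    if not 0 <= angle_deg <= 180:
        raise ValueError("servo travel is limited to 180 degrees")
    return min_us + (max_us - min_us) * angle_deg / 180

print(angle_to_pulse_us(90))  # 1500.0 (centre position)
```

On the Arduino side this mapping is normally hidden behind the Servo library, but it shows why a servo of this type can only sweep 90 degrees either side of centre: the pulse range itself encodes a 180-degree span.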




Figure 5. Block Diagram of LAHEEB

Figure 6. Flame sensor

Figure 7. Thermal camera

2.3. Software design details
Over the years, Arduino has been the brain of many projects, from simple objects to complex scientific instruments [26, 27]. An Arduino DUE is used for programming in this project, as shown in Figure 8. For programming the microcontroller, Arduino provides an integrated development environment (IDE) based on the Processing language together with the C and C++ languages. Arduino is used in this project because it is inexpensive, cross-platform across many operating systems, and simple, with a clear programming environment, and because the open-source Arduino IDE makes it easy to write code and upload it to the board.

Figure 8. Arduino DUE microcontroller

3. RESULTS AND DISCUSSION

A smart firefighting device (LAHEEB) has been developed that can successfully detect the source of a fire, extinguish it, and increase knowledge about fire behavior in the incident area. LAHEEB can find the source of fire by using a five-channel flame sensor and an ultrasonic sensor; the flame sensor is used to detect the fire and its location. All sensors are connected to the Arduino DUE together with the pump and servo motors to control





the movement of the device in 360 and 90 up and down. If the flame sensor detects the fire, the servo motor will stop. The pump will start to react to push the fire terminator fluid in the source of the fire. Flame and thermal detection are the two types of detection that performed this project. The flame detection test is worked from seven flame sensors that built on LAHEEB device to detect all the range of a room, the test is running by comparing the seven sensors, so the high value is used to move the servo to the position of the that flame and centering the fire source as shown in Figure 9. Three tests are operated to ensure that the device provide an accurate result, Figures 10 and 11 show the ability of the device to put out the first source of fire, and then moved to the second source of fire and start firefight it. After that, the device also detects the third source of fire and turn the pump on. The figure clearly shows how the bottom-cap turn to the source of fire by using flame and thermal camera. After that, the device checks the source and center the sprayer to pump the fire terminator, Figure 12 show how the device pumps the liquid through sprayer and how the traditional firefighting system does not sense the fire at all. Thermal Detection is considered one of the most important tests in this device since the camera brings the high resolutions data and an accurate result as well. The idea is to check if the fire is really existing according to flame sensor detection or not. Moreover, it gives a correct location and help to record the situation, so it will help to know what the reason of the fire is. Testing the thermal camera operation runs through Matlab to identify how camera works, and how the information could be useful in the device to detect the source of the fire with its location. Figure 13 shows the Matlab result and how it sees the surfaces of the object, it also shows the source of fire with the human detection as well. 
Using this particular thermal camera helps to detect the fire easily and quickly. Figures 14-16 show further experiments that test the quality of the thermal camera: how the camera sees the fire, and how it sees humans together with the fire. These experiments help not only to detect the fire but also to detect whether there is a victim in the fire area, which is very important information to send to the firefighters.

To sum up, the smart firefighting device system (LAHEEB) can find the source of a fire by using seven flame sensors and a thermal camera. The flame sensors detect the fire and its approximate location, while the thermal camera confirms that there is a fire and pinpoints its exact location. The detection procedure is described in the flowchart of the smart firefighting device (LAHEEB) shown in Figure 17. All sensors and the camera are connected to an Arduino DUE, together with the pump and the servo motors that control the movement of the bottom-cap. If any flame sensor detects a fire, the servo motor moves to that location, the thermal camera checks it, and then the pump starts pushing the extinguishing fluid toward the source of the fire.

Figure 9. A view of the first test of examining the fire inside the room

Designing and testing of a smart firefighting device system (LAHEEB) (Yousef Samkari)



Figure 10. A view of the second test of examining the fire inside the room

Figure 11. A view of the third test of examining the fire inside the room

Figure 12. The device pumping the liquid to the fire source through the sprayer

Int J Rob & Autom, Vol. 8, No. 2, June 2020 : 143 – 152



Figure 13. Testing the thermal camera results using Matlab

Figure 14. Thermal camera test of detecting the surface of a human and the source of fire

Figure 15. Thermal camera test of detecting the source of fire and human hand




Figure 16. Thermal camera test of detecting the source of fire and human hand

Figure 17. Flowchart of smart firefighting device system (LAHEEB)




4. CONCLUSION

In the present work, the design and testing of a smart firefighting device system (LAHEEB) have been discussed. The device is designed to detect the source of a fire, extinguish it, and gather knowledge about fire behavior from the incident area. The experimental results show that LAHEEB achieved its aims and objectives. The device was developed to help firefighters in their duty: it can detect the source of a fire, extinguish it, and report fire behavior from the incident area. LAHEEB can extinguish fires of different classes, A, B, C, D, and F/K, as well as electrical and metal fires, in a short time and without letting the fire spread. The robot reduces the risk of injury to firefighters and potential victims and decreases the monetary losses, which increase considerably as fire duration increases. LAHEEB can also avoid hitting obstacles and surrounding objects by using its sensors.

REFERENCES
[1] RobotShop Distribution Inc., "History of Robotics: Timeline," 2008.
[2] F. Tanaka, et al., "Telepresence robot helps children in communicating with teachers who speak a different language," in Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, Bielefeld, Germany, 2014, pp. 399-406.
[3] E. H. Harik and A. Korsaeth, "Combining Hector SLAM and Artificial Potential Field for Autonomous Navigation Inside a Greenhouse," Robotics, vol. 7, no. 2, p. 22, 2018.
[4] H. Singh, "Design and Development of an Autonomous Robot," 2019.
[5] J. Lee, et al., "Industrial robot calibration method using Denavit–Hartenberg parameters," in 17th International Conference on Control, Automation and Systems (ICCAS), 2017, pp. 1834-1837.
[6] S. Jeelani, et al., "Robotics and medicine: A scientific rainbow in hospital," Journal of Pharmacy & Bioallied Sciences, vol. 7, no. Suppl 2, pp. S381-S383, 2015.
[7] M. Yusof and T. Dodd, "Pangolin: A Variable Geometry Tracked Vehicle with Independent Track Control," Field Robotics, pp. 917-924, 2017.
[8] C. Xin, et al., "Design and Implementation of Debris Search and Rescue Robot System Based on Internet of Things," in International Conference on Smart Grid and Electrical Automation (ICSGEA), 2018, pp. 303-307.
[9] C.-P. Day, "Robotics in Industry—Their Role in Intelligent Manufacturing," Engineering, vol. 4, no. 4, pp. 440-445, 2018.
[10] M. Aliff, S. Dohta, and T. Akagi, "Simple Trajectory Control Method of Robot Arm Using Flexible Pneumatic Cylinders," Journal of Robotics and Mechatronics, vol. 27, no. 6, pp. 698-705, 2015.
[11] M. Aliff, S. Dohta, and T. Akagi, "Control and analysis of simple-structured robot arm using flexible pneumatic cylinders," International Journal of Advanced and Applied Sciences, vol. 4, no. 12, pp. 151-157, 2017.
[12] M. Aliff, S. Dohta, and T. Akagi, "Control and analysis of robot arm using flexible pneumatic cylinder," Mechanical Engineering Journal, vol. 1, no. 5, pp. DR0051-DR0051, 2014.
[13] A. Dhumatkar, et al., "Automatic Fire Fighting Robot," International Journal of Recent Research in Mathematics Computer Science and Information Technology, 2015.
[14] Y. Samkari, et al., "A Smart Firefighting Robot System (LAHEEB)," International Journal of Engineering and Technology, vol. 11, pp. 359-366, 2019, doi:10.21817/ijet/2019/v11i2/191102065.
[15] J. D. Setiawan, M. Subchan, and A. Budiyono, "Virtual Reality Simulation of Fire Fighting Robot Dynamic and Motion," in ICIUS, Oct. 24-26, 2007.
[16] C. Flesher, et al., "Fire Protection Robot Final Report," pp. 1-78, 2004.
[17] M. Durkin, et al., "Firefighting Robot: A Proposal," May 5, 2008.
[18] G. Weed, et al., "Pokey the Fire-Fighting Robot: A Logical Design Using Digital and Analog Circuitry," 1999.
[19] J.-H. Kim, S. Jo, and B. Y. Lattimer, "Feature Selection for Intelligent Firefighting Robot Classification of Fire, Smoke, and Thermal Reflections Using Thermal Infrared Images," Journal of Sensors, 13 pages, 2016.
[20] R. N. Haksar and M. Schwager, "Distributed Deep Reinforcement Learning for Fighting Forest Fires with a Network of Aerial Robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 1067-1074.
[21] S. Jiang, et al., "Buckling Behaviour of Aluminium Alloy Columns under Fire Conditions," Thin-Walled Structures, vol. 124, pp. 523-537, 2018, doi:10.1016/j.tws.2017.12.035.
[22] M. Aliff, et al., "Development of Fire Fighting Robot (QRob)," International Journal of Advanced Computer Science and Applications, vol. 10, no. 1, 2019, doi:10.14569/ijacsa.2019.0100118.
[23] A. M. A. Ghiet and A. Baba, "Robot Arm Control with Arduino," 2017, doi:10.13140/RG.2.2.10227.53286.
[24] M. S. Nagesh, et al., "Fire Extinguishing Robot," IJARCCE, vol. 5, no. 12, pp. 200-202, 2016, doi:10.17148/ijarcce.2016.51244.
[25] M. Rajesvari, et al., "Autonomous Fire Fighting Robot with Multi Sensor Fire Detection Using Arduino," in National Conference on Emerging Technologies for Sustainable Engineering & Management (NCETSEM'18), 2018.
[26] Arduino DUE, https://store.arduino.cc/usa/due, 2019.
[27] S. Ferdoush and X. Li, "Wireless sensor network system design using Raspberry Pi and Arduino for environmental monitoring applications," Procedia Computer Science, vol. 34, pp. 103-110, 2014.




BIOGRAPHIES OF AUTHORS

Yousef Samkari received his bachelor's degree in Electrical Engineering from Gannon University in 2015 and his master's degree in Mechanical Engineering from Umm Al-Qura University in 2019. He worked in the department of engineering, where he was responsible for research projects in the area of robotics.

Kamel Guedri received his bachelor's, master's, and PhD degrees in Energy Engineering. He worked as an advisor to Umm Al-Qura University (UQU) for its branches, and then as the Director of Mechanical Engineering.

Mowffaq Oreijah received his master's degree from the University of Melbourne in 2009 and his PhD from RMIT University in 2015. He worked as an assistant professor in the department of mechanical engineering and was appointed Chairman of Mechanical Engineering at Umm Al-Qura University (UQU) for two years. He then served as the Director of Intellectual Property at UQU and Dean of the Institute of Entrepreneurship and Innovation.

Shadi Munshi received his Ph.D. in Mechanical/Mechatronic Engineering and has fifteen years of academic experience in Sydney, Australia, and Makkah, Saudi Arabia. He specializes in mechatronic and mechanical engineering, design, and modelling.

Sufyan Azam works in Mechanical/Mechatronic Engineering; his areas of expertise include sensors, MEMS, nanocomposite materials, design, vibration, rapid prototyping, and reverse engineering.


