METHODS OF COMPARATIVE ANALYSIS OF BANKS FUNCTIONING: CLASSIC AND NEW APPROACHES
By: Alexander Kuzemin, Vyacheslav Lyashenko  (4463 reads)
Rating: (1.00/10)

Abstract: General aspects of carrying out a comparative analysis of the functioning and development of banks are considered. The classical interpretation of assessing the efficiency of bank management, from the point of view of the interrelation between its liquidity and profitability, is examined. Questions of existential dynamics in a system of comparative analysis of complex economic processes and objects are generalized.

Keywords: bank, analysis, microsituation, statistical conclusion, nonlinear dynamics, Wilcoxon criterion.

ACM Classification Keywords: H.4.2 Information Systems Applications: Types of Systems - Decision Support

Link:

METHODS OF COMPARATIVE ANALYSIS OF BANKS FUNCTIONING: CLASSIC AND NEW APPROACHES

Alexander Kuzemin, Vyacheslav Lyashenko

http://foibg.com/ijita/vol16/IJITA16-4-p07.pdf

EXTENDED ALGORITHM FOR TRANSLATION OF MSC-DIAGRAMS INTO PETRI NETS
By: Sergiy Kryvyy, Oleksiy Chugayenko  (3707 reads)
Rating: (1.00/10)

Abstract: The article presents an algorithm for translating a system described by an MSC document into an ordinary Petri net modulo strong bisimulation. Only the static properties of the MSC document are explored: condition values are ignored (guarding conditions are considered always true) and all loop boundaries are interpreted as <1,inf>. The obtained Petri net can later be used for determining various system properties, both static and dynamic (e.g. liveness, boundedness, fairness, trap and mutual exclusion detection). The net retains forward and backward traceability with the original MSC document, so detected errors can easily be traced back to the original system. The presented algorithm is implemented as a working prototype and can be used for automatic or semi-automatic detection of system properties, for example for early defect detection in telecommunication, software and middleware development. The article contains an example of using the algorithm to correct an error in a producer-consumer system.
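The target formalism can be sketched in a few lines: a place/transition net with a marking and transition firing. The producer-consumer net below is a toy stand-in for the paper's running example; the MSC-to-Petri-net translation itself is not reproduced here.

```python
# Minimal place/transition Petri net with interleaving firing semantics.
class PetriNet:
    def __init__(self, marking, transitions):
        # transitions: name -> (consumed places dict, produced places dict)
        self.marking = dict(marking)
        self.transitions = transitions

    def enabled(self, name):
        pre, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in pre.items())

    def fire(self, name):
        assert self.enabled(name), f'{name} is not enabled'
        pre, post = self.transitions[name]
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n

# A toy producer-consumer net (illustrative names, not from the paper):
net = PetriNet(
    marking={'idle': 1, 'buffer': 0},
    transitions={
        'produce': ({'idle': 1}, {'buffer': 1, 'idle': 1}),
        'consume': ({'buffer': 1}, {}),
    },
)
net.fire('produce')
net.fire('consume')
print(net.marking)  # {'idle': 1, 'buffer': 0}
```

Checking properties such as liveness or boundedness then amounts to exploring the reachable markings of a net like this.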

Keywords: MSC, Petri Net, model checking, verification, RAD.

ACM Classification Keywords: D.2.4 Software/Program Verification - Formal methods, Model checking

Link:

EXTENDED ALGORITHM FOR TRANSLATION OF MSC-DIAGRAMS INTO PETRI NETS

Sergiy Kryvyy, Oleksiy Chugayenko

http://foibg.com/ijita/vol16/IJITA16-4-p06.pdf

COGNITIVE APPROACH IN CASTINGS’ QUALITY CONTROL
By: Polyakova et al.  (3869 reads)
Rating: (1.00/10)

Abstract: Every year the production volume of castings grows; production of non-ferrous castings, above all aluminium, grows especially fast. As a result, requirements for casting quality also increase. Foundry specialists all over the world put great effort into managing the problem of casting defects. The authors suggest using a cognitive approach to modeling and simulation. The cognitive approach gives a unique opportunity to bind all the discovered factors into a single cognitive model and work with them jointly and simultaneously. The method of cognitive modeling (simulation) should provide foundry industry experts with a comprehensive instrument that helps them solve complex problems such as: predicting the probability of defect occurrence; visualizing the process of defect formation (by using a cognitive map); investigating and analyzing direct or indirect cause-and-effect relations. The cognitive models mentioned comprise a diverse network of factors and their relations, which together thoroughly describe all the details of the foundry process and their influence on the appearance of casting defects and other aspects. Moreover, the article contains an example of a simple die-casting model and simulation results. Implementation of the proposed method will help foundry specialists reveal the mechanism and the main causes of casting defect formation.

Keywords: castings quality management, casting defects, expert systems, computer diagnostics, cognitive model, modeling, simulation.

ACM Classification Keywords: I.6.5 Computing Methodologies - Simulation and Modeling - Model Development - Modeling methodologies

Link:

COGNITIVE APPROACH IN CASTINGS’ QUALITY CONTROL

Irina Polyakova, Jürgen Bast, Valeriy Kamaev, Natalia Kudashova, Andrey Tikhonin

http://foibg.com/ijita/vol16/IJITA16-4-p05.pdf

PRESENTATION OF ONTOLOGIES AND OPERATIONS ON ONTOLOGIES IN FINITE-STATE ...
By: Sergii Kryvyi, Oleksandr Khodzinskyi  (4391 reads)
Rating: (1.00/10)

Abstract: A representation of ontologies using finite-state machines is considered. This representation allows introducing operations on ontologies by means of the regular algebra of languages. The operations over ontologies make it possible to automate the analysis and synthesis of ontologies and their component parts.
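As a rough illustration of the idea (not the authors' construction), one can encode each ontology as a finite automaton whose accepted strings are labeled paths; the intersection of two ontologies is then the standard product automaton from the regular algebra of languages. The toy ontologies and path alphabet below are assumptions.

```python
# Deterministic finite automaton over a path alphabet.
class DFA:
    def __init__(self, start, accept, delta):
        self.start, self.accept, self.delta = start, frozenset(accept), delta

    def accepts(self, word):
        state = self.start
        for sym in word:
            state = self.delta.get((state, sym))
            if state is None:
                return False
        return state in self.accept

def intersect(a, b):
    """Product construction: accepts exactly the words both DFAs accept.
    (Unreachable product states are harmless for this sketch.)"""
    delta = {}
    for (qa, sym), ra in a.delta.items():
        for (qb, sym2), rb in b.delta.items():
            if sym == sym2:
                delta[((qa, qb), sym)] = (ra, rb)
    accept = {(x, y) for x in a.accept for y in b.accept}
    return DFA((a.start, b.start), accept, delta)

# Two toy ontologies whose accepted words are concept paths:
onto1 = DFA(0, {2}, {(0, 'animal'): 1, (1, 'mammal'): 2, (1, 'bird'): 2})
onto2 = DFA('s', {'f'}, {('s', 'animal'): 'm', ('m', 'mammal'): 'f'})
common = intersect(onto1, onto2)
print(common.accepts(['animal', 'mammal']))  # True: path in both ontologies
print(common.accepts(['animal', 'bird']))    # False: only in onto1
```

Union and difference of ontologies can be obtained the same way from the corresponding regular-language operations.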

Keywords: ontology, operations, finite automata.

ACM Classification Keywords: I.2.4 Knowledge Representation Formalisms and Methods; F.4.1 Finite Automata

Link:

PRESENTATION OF ONTOLOGIES AND OPERATIONS ON ONTOLOGIES IN FINITE-STATE MACHINES THEORY

Sergii Kryvyi, Oleksandr Khodzinskyi

http://foibg.com/ijita/vol16/IJITA16-4-p04.pdf

WEBLOG CLUSTERING IN MULTILINEAR ALGEBRA PERSPECTIVE
By: Andri Mirzal  (4539 reads)
Rating: (1.00/10)

Abstract: This paper describes a clustering method for labeled link networks (semantic graphs) that groups important nodes (highly connected nodes) together with their relevant link labels, using a technique borrowed from multilinear algebra known as PARAFAC tensor decomposition. In this kind of network the adjacency matrix cannot fully describe the network structure, so we expand it into a 3-way adjacency tensor that records not only which nodes a node connects to but also under which link labels. Applying the PARAFAC decomposition yields, for each decomposition group, two score-ranked lists, one of nodes and one of link labels, so the clustering step that extracts the important nodes along with their relevant labels reduces to sorting the lists in decreasing order. To test the method, we construct a labeled link network from a blog dataset, where the blogs are the nodes and the labeled links are the words they share. The similarity between our results and standard measures looks promising, about 0.87 for the two most important tasks: finding the most relevant words for a blog query and finding the most similar blogs to a blog query.
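The pipeline can be sketched in plain NumPy: build the 3-way adjacency tensor T[i, j, k] = 1 if node i links to node j under label k, factor it with a small CP (PARAFAC) alternating-least-squares loop, and rank nodes and labels by their factor scores. The toy network below is an assumption; the author's blog dataset and implementation are not reproduced.

```python
import numpy as np

def unfold(T, mode):
    """Matricize T along `mode` (row-major layout)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R)."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, iters=200, seed=0):
    """Rank-R PARAFAC of a 3-way tensor via alternating least squares."""
    rng = np.random.default_rng(seed)
    factors = [rng.random((s, rank)) for s in T.shape]
    for _ in range(iters):
        for mode in range(3):
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])  # matches unfold layout
            G = (others[0].T @ others[0]) * (others[1].T @ others[1])
            factors[mode] = unfold(T, mode) @ kr @ np.linalg.pinv(G)
    return factors

# Two link groups with distinct labels; each CP component should pick
# out one group of nodes together with its relevant label.
T = np.zeros((4, 4, 2))
T[0, 1, 0] = T[1, 0, 0] = 1.0
T[2, 3, 1] = T[3, 2, 1] = 1.0
A, B, C = cp_als(T, rank=2)
for r in range(2):
    top_nodes = np.argsort(-np.abs(A[:, r]))[:2]
    top_label = int(np.argmax(np.abs(C[:, r])))
    print(f'component {r}: nodes {top_nodes}, label {top_label}')
```

Sorting the columns of the node factor A and the label factor C in decreasing score order is exactly the "two lists per decomposition group" step the abstract describes.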

Keywords: Blogs, Clustering Method, Labeled-link Network, PARAFAC Decomposition.

ACM Classification Keywords: I.7.1 Document management

Link:

WEBLOG CLUSTERING IN MULTILINEAR ALGEBRA PERSPECTIVE

Andri Mirzal

http://foibg.com/ijita/vol16/IJITA16-4-p03.pdf

PARALLELIZATION METHODS OF LOGICAL INFERENCE FOR CONFLUENT RULE-BASED SYSTEM
By: Irene Artemieva, Michael Tyutyunnik  (3353 reads)
Rating: (1.00/10)

Abstract: The article describes research aimed at developing a program system for multiprocessor computers. The system is based on a confluent declarative production system. The article defines several schemes of parallel logical inference and the conditions affecting scheme choice; the conditions include properties of the program's information graph, relations between data objects, data structures, and input data.

Keywords: logical inference, parallel rule-based systems

ACM Classification Keywords: D.3.2 Constraint and logic languages, I.2.5 Expert system tools and techniques.

Link:

PARALLELIZATION METHODS OF LOGICAL INFERENCE FOR CONFLUENT RULE-BASED SYSTEM

Irene Artemieva, Michael Tyutyunnik

http://foibg.com/ijita/vol16/IJITA16-4-p02.pdf

CLASSIFICATION OF HEURISTIC METHODS IN COMBINATORIAL OPTIMIZATION
By: Sergii Sirenko  (4006 reads)
Rating: (1.00/10)

Abstract: Combinatorial optimization is an important field for the scientific as well as the industrial world. Such problems arise in many areas of computer science and other disciplines in which computational methods are applied, such as artificial intelligence, operations research, bioinformatics and electronic commerce. Many combinatorial optimization problems are NP-hard, and in this field heuristics are often the only way to solve the problem efficiently, despite the fact that heuristics represent a class of methods for which, in general, there is no formal theoretical justification of their performance. Many heuristic methods with different qualities and characteristics have been introduced for combinatorial optimization problems, and one approach to describing and analyzing these methods is classification. The paper proposes a number of characteristics by which heuristics for solving combinatorial optimization problems can be classified. The suggested classification extends previous work in the area: it generalizes existing approaches to the classification of heuristics and provides formal definitions of the algorithm characteristics on which the classes are based. The classification describes heuristic methods from different viewpoints; the main aspects considered are the decision-making approach, structural complexity, solution spaces utilized, presence of memory, trajectory continuity, search-landscape modification, and presence of adaptation.

Keywords: combinatorial optimization, classification of methods, heuristics, metaheuristics.

ACM Classification Keywords: G.1.6 Numerical Analysis Optimization, I.2.8 Artificial Intelligence: Problem Solving, Control Methods, and Search – Heuristic methods, General Terms: Algorithms.

Link:

CLASSIFICATION OF HEURISTIC METHODS IN COMBINATORIAL OPTIMIZATION

Sergii Sirenko

http://foibg.com/ijita/vol16/IJITA16-4-p01.pdf

A GENETIC AND MEMETIC ALGORITHM FOR SOLVING THE UNIVERSITY COURSE TIMETABLE ...
By: Velin Kralev  (3355 reads)
Rating: (1.00/10)

Abstract: In this paper genetic and memetic algorithms are presented as an approach to solving combinatorial optimization problems. The key terms associated with these algorithms, such as representation, coding and evaluation of the solution, genetic operators for crossover, mutation and reproduction, stopping criteria and others, are described. Two developed algorithms (genetic and memetic), with the computational complexity defined for each of them, are presented and applied to the university course timetable problem. The methodology and the object of study are presented, the main objectives of the planned experiments are formulated, and the conditions for conducting the experiments are specified. The developed prototype and its functionality are briefly presented, the results are analyzed, appropriate conclusions are drawn, and future directions of work are outlined.

Keywords: genetic algorithm, memetic algorithm, university course timetable problem.

Link:

A GENETIC AND MEMETIC ALGORITHM FOR SOLVING THE UNIVERSITY COURSE TIMETABLE PROBLEM

Velin Kralev

http://foibg.com/ijita/vol16/IJITA16-3-p08.pdf

ANALOGICAL MAPPING USING SIMILARITY OF BINARY DISTRIBUTED REPRESENTATIONS
By: Serge V. Slipchenko, Dmitri A. Rachkovskij  (3898 reads)
Rating: (1.00/10)

Abstract: We develop an approach to analogical reasoning with hierarchically structured descriptions of episodes and situations, based on a particular form of vector representations: structure-sensitive sparse binary distributed representations known as code-vectors. We propose distributed representations of analog elements that allow finding correspondences between the elements, implementing analogical mapping as well as analogical inference based on the similarity of those representations. The proposed methods are investigated using test analogs, and the obtained results match those of known mature analogy models. Moreover, exploiting the similarity properties of distributed representations provides better scaling and enhances the semantic basis of analogs and their elements as well as neurobiological plausibility. The paper also provides a brief survey of analogical reasoning, its models, and the representations employed in those models.

Keywords: analogy, analogical mapping, analogical inference, distributed representation, code-vector, reasoning, knowledge bases.

ACM Classification Keywords: I.2 ARTIFICIAL INTELLIGENCE, I.2.4 Knowledge Representation Formalisms and Methods, I.2.6 Learning (Analogies)

Link:

ANALOGICAL MAPPING USING SIMILARITY OF BINARY DISTRIBUTED REPRESENTATIONS

Serge V. Slipchenko, Dmitri A. Rachkovskij

http://foibg.com/ijita/vol16/IJITA16-3-p07.pdf

DISTANCE MATRIX APPROACH TO CONTENT IMAGE RETRIEVAL
By: Kinoshenko et al.  (3576 reads)
Rating: (1.00/10)

Abstract: As the volume of image data and the need to use it in various applications have grown significantly in recent years, retrieval efficiency and effectiveness have become essential. Unfortunately, existing indexing methods are not applicable to a wide range of problem-oriented fields because of their operating time limitations and strong dependency on the traditional descriptors extracted from the image. To meet these higher requirements, a novel distance-based indexing method for region-based image retrieval is proposed and investigated. The method creates premises for considering embedded partitions of images, so that the search can be carried out at different levels of refinement or coarsening and can thus target the meaningful content of the image.

Keywords: content image retrieval, distance matrix, indexing.

ACM Classification Keywords: H.3.3 Information Search and Retrieval: Search process

Link:

DISTANCE MATRIX APPROACH TO CONTENT IMAGE RETRIEVAL

Dmitry Kinoshenko, Vladimir Mashtalir, Elena Yegorova

http://foibg.com/ijita/vol16/IJITA16-3-p06.pdf

THE CASCADE NEO-FUZZY ARCHITECTURE USING CUBIC-SPLINE ACTIVATION FUNCTIONS
By: Yevgeniy Bodyanskiy, Yevgen Viktorov  (4111 reads)
Rating: (1.00/10)

Abstract: In the paper a new hybrid system of computational intelligence called the Cascade Neo-Fuzzy Neural Network (CNFNN) is introduced. This architecture has a structure similar to the Cascade-Correlation Learning Architecture proposed by S.E. Fahlman and C. Lebiere, but differs from it in the type of artificial neurons. The CNFNN contains neo-fuzzy neurons, which can be adjusted using high-speed linear learning procedures. Compared with conventional neural networks, the proposed CNFNN is characterized by a high learning rate and a small required learning sample, and its operation can be described by fuzzy linguistic "if-then" rules, providing "transparency" of the obtained results. Using cubic-spline membership functions instead of conventional triangular functions increases the accuracy of approximating smooth functions.

Keywords: artificial neural networks, constructive approach, fuzzy inference, hybrid systems, neo-fuzzy neuron, cubic-spline functions.

ACM Classification Keywords: I.2.6 Learning – Connectionism and neural nets.

Link:

THE CASCADE NEO-FUZZY ARCHITECTURE USING CUBIC-SPLINE ACTIVATION FUNCTIONS

Yevgeniy Bodyanskiy, Yevgen Viktorov

http://foibg.com/ijita/vol16/IJITA16-3-p05.pdf

TRAINED NEURAL NETWORK CHARACTERIZING VARIABLES FOR PREDICTING ...
By: Sotto et al.  (3463 reads)
Rating: (1.00/10)

Abstract: Many organic compounds that cause irreversible damage to human health and the ecosystem are present in water resources. Among these hazardous substances, phenolic compounds play an important role in the actual contamination. The use of membrane technology is increasing exponentially in drinking water production and wastewater treatment. The removal of organic compounds by nanofiltration membranes is characterized not only by molecular sieving effects but also by membrane-solute interactions. The influence of the sieving parameters (molecular weight and molecular diameter) and the physicochemical interactions (dissociation constant and molecular hydrophobicity) on the membrane rejection of organic solutes was studied. Molecular hydrophobicity is expressed as the logarithm of the octanol-water partition coefficient. This paper proposes a method that can be used for symbolic knowledge extraction from a trained neural network once it has been trained to the desired performance; the method is based on detecting the most important variables in problems where multicollinearity exists among the input variables.

Keywords: Neural Networks, Radial Basis Functions, Nanofiltration, Membranes, Retention.

ACM Classification Keywords: K.3.2 Learning (Knowledge acquisition)

Link:

TRAINED NEURAL NETWORK CHARACTERIZING VARIABLES FOR PREDICTING ORGANIC RETENTION BY NANOFILTRATION MEMBRANES

Arcadio Sotto, Ana Martinez, Angel Castellanos

http://foibg.com/ijita/vol16/IJITA16-3-p04.pdf

EXTENDED NETWORKS OF EVOLUTIONARY PROCESSORS
By: Mingo et al.  (3543 reads)
Rating: (1.00/10)

Abstract: This paper presents an extended behavior of networks of evolutionary processors. Usually such nets are able to solve NP-complete problems working with symbolic information: information can evolve by applying rules and can be communicated through the net provided some constraints are verified. These nets are based on the biological behavior of membrane systems, transformed into a suitable computational model in which only symbolic information is communicated. This paper proposes to communicate evolution rules as well as symbolic information. The idea arises from the DNA structure in living cells: DNA encodes both information and operations, and it can be sent to other cells. Extended nets can be considered a superset of networks of evolutionary processors, since permitting and forbidding constraints can be written in order to deny rule communication.

Keywords: Networks of Evolutionary Processors, Membrane Systems, and Natural Computation.

ACM Classification Keywords: F.1.2 Modes of Computation, I.6.1 Simulation Theory, H.1.1 Systems and Information Theory

Link:

EXTENDED NETWORKS OF EVOLUTIONARY PROCESSORS

Luis Fernando de Mingo, Nuria Gómez Blas, Francisco Gisbert, Miguel A. Peña

http://foibg.com/ijita/vol16/IJITA16-3-p03.pdf

FAST LINEAR ALGORITHM FOR ACTIVE RULES APPLICATION IN TRANSITION P SYSTEMS
By: Javier Gil et al.  (3433 reads)
Rating: (1.00/10)

Abstract: Transition P systems are computational models inspired by basic features of biological membranes and the observation of biochemical processes. In these models a membrane contains multisets of objects, which evolve according to given evolution rules. The basis of the computation is the cellular membrane, the basic unit of the structure and functioning of all living beings: the biological cell. These models, called P systems or membrane systems, arise from the need to find new forms of computation that exceed the limits set by complexity theory in conventional computing, drawing mainly on the distributed, non-deterministic and massively parallel way in which cells behave. In the field of Transition P system implementation, the need has been identified to determine how much time the application of active evolution rules in membranes will take. In addition, having time estimates of rule application makes it possible to take important decisions related to the design of hardware/software architectures. In this paper we propose a new evolution rule application algorithm oriented towards the implementation of Transition P systems. The developed algorithm is sequential and has linear complexity in the number of evolution rules. Moreover, it achieves the smallest execution times compared with preceding algorithms. The algorithm is therefore very appropriate for implementing Transition P systems on sequential devices.
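The rule-application step the paper optimizes can be illustrated with a simplified sequential sketch: rules rewrite a membrane's multiset, and a single linear pass applies each applicable rule as many times as it fits. This deterministic pass is an illustration only; Transition P systems apply rules nondeterministically and maximally in parallel, and the paper's actual algorithm is not reproduced here.

```python
from collections import Counter

def max_applications(region, lhs):
    """How many times a rule's left-hand multiset fits in the region."""
    return min(region[obj] // need for obj, need in lhs.items())

def apply_rules(region, rules):
    """One linear pass over the rules: apply each active rule maximally."""
    region = Counter(region)
    applied = []
    for lhs, rhs in rules:
        k = max_applications(region, lhs)
        if k == 0:
            continue
        for obj, need in lhs.items():
            region[obj] -= k * need
        for obj, out in rhs.items():
            region[obj] += k * out
        applied.append((lhs, k))
    return region, applied

region = Counter({'a': 5, 'b': 3})
rules = [({'a': 2}, {'c': 1}),          # 2a -> c
         ({'a': 1, 'b': 1}, {'d': 2})]  # ab -> dd
after, applied = apply_rules(region, rules)
print(after)  # rule 1 fires twice, then rule 2 once: all a's consumed
```

The pass visits each rule exactly once, which is where the linear order in the number of evolution rules comes from.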

Keywords: Natural Computing, Membrane computing, Transition P System, Rules Application Algorithms

ACM Classification Keywords: D.1.m Miscellaneous – Natural Computing

Link:

FAST LINEAR ALGORITHM FOR ACTIVE RULES APPLICATION IN TRANSITION P SYSTEMS

Francisco Javier Gil, Jorge Tejedor, Luis Fernández

http://foibg.com/ijita/vol16/IJITA16-3-p02.pdf

GENE CODIFICATION FOR NOVEL DNA COMPUTING PROCEDURES
By: Goni Moreno et al.  (4729 reads)
Rating: (1.00/10)

Abstract: The aim of the paper is to show how suitable codification of genes can help the correct resolution of a problem using DNA computations. Genes are the input data of the problem to solve, so the first task to carry out is the definition of the genes in order to perform a complete computation in the best possible way. In this paper we propose a model for encoding data into DNA strands so that this data can be used in the simulation of a genetic algorithm based on molecular operations. The first problem when trying to apply an algorithm in DNA computing is how to codify the data that the algorithm will use. The gene formation presented in this paper allows us to join the codification and evaluation steps into one single stage. Furthermore, these genes turn out to be stable in a DNA soup because we use bond-free languages in their definition. Previous work on DNA coding defined bond-free languages with several properties assuring the stability of any DNA word of such a language. We prove that a bond-free language is necessary but not sufficient to codify a gene correctly; this is because selection must be done based on a concrete gene characterization. This characterization can be developed in many different ways by codifying what we call the fitness field of the gene. It is shown how to use several DNA computing procedures based on genes, from single- and double-stranded molecules to more complex DNA structures like plasmids.

Keywords: DNA Computing, Bond-Free Languages, Genetic Algorithms, Gene Computing.

ACM Classification Keywords: I.6. Simulation and Modeling, B.7.1 Advanced Technologies, J.3 Biology and Genetics

Link:

GENE CODIFICATION FOR NOVEL DNA COMPUTING PROCEDURES

Angel Goni Moreno, Paula Cordero, Juan Castellanos

http://foibg.com/ijita/vol16/IJITA16-3-p01.pdf

KIRLIAN IMAGE PREPROCESSING DIAGNOSTIC SYSTEM
By: Vishnevskey et al.  (4031 reads)
Rating: (1.00/10)

Abstract: An information technology for Kirlian image preprocessing is developed for decision-making support in a telemedicine diagnostic system. In particular, the preprocessing includes the selection of objects (finger emissions) within the overall image. The algorithms and image processing examples are described.

Keywords: information technology, Kirlian image, preprocessing.

ACM Classification Keywords: J.3 Life and medical sciences – Medical information systems

Link:

KIRLIAN IMAGE PREPROCESSING DIAGNOSTIC SYSTEM

Vitaly Vishnevskey, Vladimir Kalmykov, Tatyana Romanenko, Aleksandr Tugaenko

http://foibg.com/ijita/vol16/IJITA16-2-p07.pdf

USING RANDOMIZED ALGORITHMS FOR SOLVING DISCRETE ILL-POSED PROBLEMS
By: Elena G. Revunova, Dmitri A. Rachkovskij  (3214 reads)
Rating: (1.00/10)

Abstract: In this paper, we develop an approach for improving the accuracy and speed of the solution of discrete ill-posed problems using the pseudo-inverse method. It is based on a random projection of the initial coefficient matrix. First, a brief introduction is given to least squares and discrete ill-posed problems, approximate matrix decompositions with randomized algorithms, and randomized least squares approximations. Then, we describe the techniques used in this paper to solve discrete ill-posed inverse problems, including the standard Tikhonov regularization and pseudo-inverse after a random projection. The bias-variance decomposition of solution errors is provided for different solution techniques and it is shown experimentally that the error has a minimum at some value of the varied smaller dimension of the random projection matrix. Taking two well-known test examples of the discrete ill-posed problems of Carasso and Baart, we obtain experimental results of their solution. The comparison shows that the minimal error of the pseudo-inverse solution after a random projection is close to the error of the standard Tikhonov regularization, but the former solution is obtained faster, especially at the larger noise values.
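The two techniques compared in the abstract can be sketched as follows, with a synthetic Hilbert-type matrix standing in for the Carasso and Baart test problems (which are not reproduced here); the dimensions, noise level and regularization parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# Hilbert-like matrix: a classic severely ill-conditioned test case.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy right-hand side

# Standard Tikhonov regularization: x = (A^T A + lam I)^-1 A^T b
lam = 1e-6
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def rp_pinv(A, b, k, rng):
    """Pseudo-inverse solution after a random projection to dimension k."""
    R = rng.standard_normal((k, A.shape[0])) / np.sqrt(k)
    return np.linalg.pinv(R @ A) @ (R @ b)

errors = {k: np.linalg.norm(rp_pinv(A, b, k, rng) - x_true)
          for k in (5, 10, 20, 39)}
# As the abstract reports, the error typically has a minimum at some
# intermediate k (the bias-variance tradeoff), comparable to Tikhonov.
```

Varying k and plotting `errors` against `np.linalg.norm(x_tik - x_true)` reproduces the kind of comparison the paper makes.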

Keywords: discrete ill-posed problems, pseudo-inverse, regularization, random projection, bias, variance

ACM Classification Keywords: I.5.4 Signal processing, I.6 SIMULATION AND MODELING (G.3), G.1.9 Integral Equations

Link:

USING RANDOMIZED ALGORITHMS FOR SOLVING DISCRETE ILL-POSED PROBLEMS

Elena G. Revunova, Dmitri A. Rachkovskij

http://foibg.com/ijita/vol16/IJITA16-2-p06.pdf

SOME APPROACHES FOR SOFTWARE SYSTEMS ANALYSES AND ITS UPGRADE PREDICTION
By: Igor Karelin  (3255 reads)
Rating: (1.00/10)

Abstract: This paper proposes and discusses a new type of tool for the analysis, testing and optimization of Linux-based software systems, as well as a new approach based on this tool that helps to define the best moment for executing an effective and smart software upgrade in automatic mode on an active, online system. The best moment is the one at which the software upgrade will cause the minimal loss of services. The presented tool is called CAP (Characterization of Availability and Performance) and provides engineers with an instrument for system performance tuning, profiling and issue investigation as well as stress and load testing. The described system operates with over 150 Linux-based system parameters to optimize over 50 performance characteristics. The paper discusses the CAP tool's architecture, multi-parametric analysis algorithms, and application areas. Furthermore, the paper presents possible future work to improve the tool and extend it to cover additional system parameters and characteristics. The prediction of the best moment for a software upgrade (SU) mentioned above is supposed to be made on the basis of performance and availability statistics obtained from the CAP tool.

Keywords: Software Systems Analyses, Linux Servers, Telecommunication System, Performance, Availability, Serviceability, Software Upgrade Prediction.

ACM Classification Keywords: C.4 Computer Systems Organization - Performance of Systems - Reliability, availability, and serviceability.

Link:

SOME APPROACHES FOR SOFTWARE SYSTEMS ANALYSES AND ITS UPGRADE PREDICTION

Igor Karelin

http://foibg.com/ijita/vol16/IJITA16-2-p05.pdf

PROGRAMMING OF AGENT-BASED SYSTEMS
By: Dmitry Cheremisinov, Liudmila Cheremisinova  (3891 reads)
Rating: (1.00/10)

Abstract: The purpose of the paper is to explore the possibility of applying the language PRALU, proposed for the description of parallel logical control algorithms and rooted in the Petri net formalism, to the design and modeling of real-time multi-agent systems. Using the well-known example of an English auction, it is demonstrated how to specify an agent interaction protocol with the considered means. A methodology for programming the agents of a multi-agent system is proposed, based on describing its protocol in the language PRALU of parallel algorithms of logic control. The methodology consists in splitting agent programs into two parts: a synchronization block and a functional block.

Keywords: multi-agent system, interaction protocol, BDI agent, parallel control algorithm.

ACM Classification Keywords: I.2.11 Computer Applications; Distributed Artificial Intelligence, Multiagent systems; D.3.3 Programming Languages: Language Constructs and Features – Control structures, Concurrent programming structures

Link:

PROGRAMMING OF AGENT-BASED SYSTEMS

Dmitry Cheremisinov, Liudmila Cheremisinova

http://foibg.com/ijita/vol16/IJITA16-2-p04.pdf

OBJECT LEVEL RUN-TIME COHESION MEASUREMENT
By: Varun Gupta, Jitender Kumar Chhabra  (3685 reads)
Rating: (1.00/10)

Abstract: Most of the object-oriented cohesion metrics proposed in the literature are static in nature and are defined at the class level. In this paper, new dynamic cohesion metrics are proposed which provide scope of cohesion measurement up to the object level and take into account important and widely used object-oriented features such as inheritance, polymorphism and dynamic binding during measurement. The proposed dynamic measures are computed at run-time, which take into consideration the actual interactions taking place among members of a class. The proposed measures are evaluated using a theoretical framework to prove their usefulness. A dynamic analyzer tool is presented which can be used to perform dynamic analysis of Java applications for the purpose of collecting run-time data for the computation of the proposed metrics. Further, a case study is conducted using a Java program to demonstrate the computation process for the proposed dynamic cohesion measures.

Keywords: Cohesion; Software metrics; Static metrics; Dynamic metrics; Aspect oriented programming; Object-oriented software.

ACM Classification Keywords: D.2.8 Software Engineering: Metrics; D.2.3 Software Engineering: Coding Tools and Techniques - Object-oriented programming

Link:

OBJECT LEVEL RUN-TIME COHESION MEASUREMENT

Varun Gupta, Jitender Kumar Chhabra

http://foibg.com/ijita/vol16/IJITA16-2-p03.pdf

INVESTIGATION ON COMPRESSION METHODS USED AS A PROTECTION INSTRUMENT ...
By: Dimitrina Polimirova, Eugene Nickolov  (4110 reads)
Rating: (1.00/10)

Abstract: This report examines important issues related to the different ways in which methods of compression influence the information security of file objects subjected to information attacks. Accordingly, the report analyzes the relationships that may exist between a selected set of attacks known at the time of the study, compression methods and file objects. A methodology for evaluating the information security of objects exposed to attacks is proposed; it accounts for the impact of the different compression methods that can be applied to these objects. A coefficient of information security is defined for each attack-method-object relation. It depends on two main parameters, TIME and SIZE, which describe the attack and the object, respectively; each parameter is considered as a separate variable before and after the application of the compression methods. Since one object can be processed by more than one compression method, different criteria and methods are used for evaluating and selecting the best compression method with respect to the information security of the investigated objects. An analysis of the obtained results is made, and on this basis recommendations are given for choosing the compression methods with the lowest risk with respect to the information security of file objects subjected to information attacks.

Keywords: Information Security, Information Attacks, Methods of Compression, File Objects, Co-efficient of Information Security, Risk Assessment.

ACM Classification Keywords: D.4.6 Security and Protection: information flow controls

Link:

INVESTIGATION ON COMPRESSION METHODS USED AS A PROTECTION INSTRUMENT OF FILE OBJECTS

Dimitrina Polimirova, Eugene Nickolov

http://foibg.com/ijita/vol16/IJITA16-2-p02.pdf

METHODS FOR AUTOMATED DESIGN AND MAINTENANCE OF USER INTERFACES
By: Valeriya Gribova  (3492 reads)
Rating: (1.00/10)

Abstract: Methods for the automated development and maintenance of intelligent software user interfaces are proposed. They rest on an ontology-based approach and are intended to decrease the effort and time required for user interface development and maintenance. A survey, the principal conception of the approach, the project components and the methods of automated project development, as well as a comparison of the methods with their analogues, are described.

Keywords: ontology, interface project, automated generation

ACM classification: I.2.2 Automatic Programming

Link:

METHODS FOR AUTOMATED DESIGN AND MAINTENANCE OF USER INTERFACES

Valeriya Gribova

http://foibg.com/ijita/vol16/IJITA16-2-p01.pdf

TRANSLITERATION AND LONGEST MATCH STRATEGY
By: System Administrator  (3350 reads)
Rating: (1.00/10)

Abstract: A natural requirement on transliteration systems is that transliteration and its inversion could be easily performed. To make this requirement more precise, we consider a text transduction as easily performable if it can be accomplished by a finite transducing device such that all successful tokenizations of input words are compliant with the left-to-right longest-match strategy. Applied to inversion of transliteration this gives a convenient sufficient condition for reversibility of transliteration.
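The left-to-right longest-match strategy is easy to make concrete. The sketch below tokenizes greedily and applies a small illustrative Cyrillic-to-Latin table (an assumption, not the paper's system), showing a reversible round trip and, in the final comment, how reversibility can fail.

```python
def longest_match(text, table):
    """Tokenize left-to-right, always taking the longest matching key."""
    keys = sorted(table, key=len, reverse=True)
    out, i = [], 0
    while i < len(text):
        for k in keys:
            if text.startswith(k, i):
                out.append(k)
                i += len(k)
                break
        else:
            raise ValueError(f'untokenizable at position {i}')
    return out

def transduce(text, table):
    """Transliterate by rewriting each longest-match token."""
    return ''.join(table[t] for t in longest_match(text, table))

# Illustrative Cyrillic-to-Latin fragment (not the paper's system):
FWD = {'ш': 'sh', 'с': 's', 'х': 'kh', 'а': 'a'}
INV = {v: k for k, v in FWD.items()}

word = 'шаша'
latin = transduce(word, FWD)   # 'shasha'
back = transduce(latin, INV)   # longest match reads 'sh' as one token
assert back == word
# With 'х' -> 'h' instead of 'kh', 'сх' would also romanize to 'sh' and
# invert to 'ш': longest-match inversion alone would not be reversible.
```

A sufficient condition of the kind the paper studies guarantees that every successful tokenization of the transliterated text coincides with this greedy one, which is what makes the inversion safe.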

Keywords: left to right, longest match, transliteration, reversible transliteration, sequential transducer.

ACM Classification Keywords: E.4 Data: Coding and information theory — Formal models of communication

Link:

TRANSLITERATION AND LONGEST MATCH STRATEGY

Dimiter Skordev

http://foibg.com/ijita/vol16/IJITA16-1-p07.pdf

INCREASING RELIABILITY AND IMPROVING THE PROCESS OF ACCUMULATOR CHARGING ...
By: Irena Nowotyńska, Andrzej Smykla  (3337 reads)
Rating: (1.00/10)

Abstract: The article presents software, written in C++ Builder, that monitors the operation of a processor-controlled impulse charger. The protocol, the interface, the components used and future research are presented.

Keywords: PCgraph, developing software, charging process, C++ Builder

ACM Classification Keywords: C.3 SPECIAL-PURPOSE AND APPLICATION-BASED SYSTEMS

Link:

INCREASING RELIABILITY AND IMPROVING THE PROCESS OF ACCUMULATOR CHARGING BASED ON THE DEVELOPMENT OF PCGRAPH APPLICATION

Irena Nowotyńska, Andrzej Smykla

http://foibg.com/ijita/vol16/IJITA16-1-p06.pdf

APPLICATION OF INFORMATION THEORIES TO SAFETY OF NUCLEAR POWER PLANTS
By: Elena Ilina  (3574 reads)
Rating: (1.00/10)

Abstract: To date, strategies aiming at the safe operation of nuclear power plants have focused mainly on the prevention of technological breakdowns and, more recently, on human attitudes and behaviors. New incidents and challenges to safety, however, have motivated the nuclear community to look for a new safety approach. The solution became a strong focus on knowledge management and the associated theories and sciences, such as information theories, artificial intelligence and informatics. In all of these, the fundamental role is played by the category of information. This work reviews a number of information interpretations and theories; of special relevance are those capturing the fundamental role information plays as a means to exercise control over the state of a system, those analyzing information communication between agents involved in safety-related activities, and those which explore the link between information and the limits of our knowledge. Quantitative measures of information content and value are introduced. Completeness, accuracy, and clarity are presented as attributes of the information acquired by the receiver. To conclude, suggestions are offered on how to use the interpretations and mathematical tools developed within the information theories to maintain and improve the safety of nuclear power plants.

Link:

APPLICATION OF INFORMATION THEORIES TO SAFETY OF NUCLEAR POWER PLANTS

Elena Ilina

http://foibg.com/ijita/vol16/IJITA16-1-p05.pdf
