Dissertation/Thesis

Click here to access the files directly from the Biblioteca Digital de Teses e Dissertações da UFAL

2024
Dissertations
1
  • TIAGO PAULINO SANTOS
  • Representation, visualization and analysis of large volumes of spatio-temporal urban public safety data

  • Advisor: THALES MIRANDA DE ALMEIDA VIEIRA
  • COMMITTEE MEMBERS:
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • RIAN GABRIEL SANTOS PINHEIRO
  • LUIS GUSTAVO NONATO
  • Date: 25-Jan-2024


  • Abstract
  • Technological transformations arising from the digital transformation of public bodies and the internet of things have enabled the generation and collection of large volumes of data, which can later be used in precise analyses. For public security, the application of new technologies in criminal analysis, patrolling and crime repression activities also brings countless possibilities, such as being able to visualize in depth the performance of the corporation's actions, monitor criminal activity, understand patterns and seek alternatives to implement better security policies. Big Data is also no longer a distant reality for public security corporations. Tracking devices, surveillance cameras, monitoring systems, customer service systems and many other sources of information already provide a large volume of data that needs to be properly processed in order to obtain relevant knowledge. In recent years, many scientific works have proposed the use of Machine Learning algorithms to recognize spatial and temporal patterns of crimes. In this context, we propose in this work a visual analysis tool for spatio-temporal urban data, and a new algorithm for detecting crime spots. These solutions were validated using databases from the Military Police of the State of Alagoas (PMAL) in case studies where the objective was to analyze spatio-temporal data on crimes and patrolling. The results of this research will be important not only from a scientific point of view, but can also be used by PMAL to optimize its decision-making processes.
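
    The dissertation's own hotspot-detection algorithm is not described here. Purely as an illustration of the kind of spatio-temporal clustering such a tool can build on, the sketch below groups crime records into spatial hotspots with DBSCAN from scikit-learn; the coordinates are hypothetical, and the PMAL data and the actual method are not reproduced.

        # Illustrative only: density-based hotspot detection with DBSCAN.
        # The coordinates are made up; the dissertation's algorithm and the
        # PMAL datasets are not reproduced here.
        import numpy as np
        from sklearn.cluster import DBSCAN

        # Hypothetical crime records: (latitude, longitude) in degrees.
        crimes = np.array([
            [-9.6658, -35.7353], [-9.6660, -35.7349], [-9.6655, -35.7358],  # dense area A
            [-9.6498, -35.7089], [-9.6501, -35.7092], [-9.6495, -35.7085],  # dense area B
            [-9.6000, -35.7600],                                            # isolated record
        ])

        EARTH_RADIUS_KM = 6371.0
        eps_km = 0.15  # records within ~150 m of each other form a hotspot

        # With the haversine metric, DBSCAN expects coordinates in radians and
        # eps expressed as an angle (distance / Earth radius).
        db = DBSCAN(eps=eps_km / EARTH_RADIUS_KM, min_samples=3, metric="haversine")
        labels = db.fit_predict(np.radians(crimes))

        for cluster_id in sorted(set(labels)):
            members = crimes[labels == cluster_id]
            if cluster_id == -1:
                print(f"noise (no hotspot): {len(members)} record(s)")
            else:
                print(f"hotspot {cluster_id}: {len(members)} records, "
                      f"centroid ~ {members.mean(axis=0).round(4)}")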

2
  • MATHEUS MACHADO VIEIRA
  • Metaheuristics for the Knapsack Problem with Forfeits

  • Advisor: RIAN GABRIEL SANTOS PINHEIRO
  • COMMITTEE MEMBERS:
  • BRUNO COSTA E SILVA NOGUEIRA
  • Dimas Cassimiro do Nascimento Filho
  • ERICK DE ANDRADE BARBOZA
  • RIAN GABRIEL SANTOS PINHEIRO
  • Date: 31-Jan-2024


  • Abstract
  • The knapsack problem is among the most well-known and studied optimization problems. Its potential applications make it a good model for many real-life problems. In this work, the Knapsack Problem with Forfeits is addressed. In this variant, pairs of conflicting items in the solution imply a penalty. Given a set of items and a conflict graph, the objective is to find a collection of items that respects the knapsack's capacity and maximizes the total value of the items minus the penalties. Applications range from the organization of the workforce on the shop floor to investment decision problems. This work proposes a new method for the problem using well-established tools, the ILS meta-heuristic and the VND local search heuristic, with the possibility of also evaluating other heuristics and meta-heuristics. Preliminary results have already been obtained, showing that the new method surpasses other approaches. This work is expected to offer an indication of the path that can be taken to refine the resolution of the problem, presenting analyses and comparisons between methods applied to the problem.
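
    The abstract names the Knapsack Problem with Forfeits and an ILS/VND method without further detail. As a hedged illustration of the objective being optimized, the sketch below evaluates candidate solutions (total value minus conflict penalties, subject to the capacity) on a small made-up instance; the dissertation's actual metaheuristic is not shown.

        # Knapsack Problem with Forfeits, objective illustration only:
        # maximize (value of selected items) - (forfeits of selected conflicting pairs),
        # subject to the capacity. The instance is made up; the ILS/VND metaheuristic
        # proposed in the dissertation is not reproduced here.
        from itertools import combinations

        values   = {0: 10, 1: 7, 2: 6, 3: 4}   # item -> profit
        weights  = {0: 5,  1: 4, 2: 3, 3: 2}   # item -> weight
        capacity = 9
        forfeits = {(0, 1): 8, (1, 2): 2}      # conflict graph edge -> penalty

        def evaluate(solution):
            """Objective value of a set of items, or None if it exceeds the capacity."""
            if sum(weights[i] for i in solution) > capacity:
                return None
            value = sum(values[i] for i in solution)
            penalty = sum(p for (i, j), p in forfeits.items()
                          if i in solution and j in solution)
            return value - penalty

        # Brute force over all subsets, feasible only for this toy instance.
        best = max(
            (frozenset(s) for r in range(len(values) + 1) for s in combinations(values, r)),
            key=lambda s: evaluate(s) if evaluate(s) is not None else float("-inf"),
        )
        print("best items:", sorted(best), "objective:", evaluate(best))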

3
  • RENDRIKSON DE OLIVEIRA SOARES
  • A Computational Architecture Based on Blockchain Technology to Enable Transparency of Audience Services and Payments on Streaming Platforms

  • Advisor: ANDRE MAGNO COSTA DE ARAUJO
  • COMMITTEE MEMBERS:
  • ANDRE MAGNO COSTA DE ARAUJO
  • FABIO JOSE COUTINHO DA SILVA
  • PAULO CAETANO DA SILVA
  • Date: 26-Feb-2024


  • Abstract
  • Blockchain technology enables secure recording and sharing of information without a central authority, using encryption across a distributed network of computers. Widely used in the software industry, it offers authenticity and security features in online transactions. Streaming platforms, such as YouTube and Spotify, among others, are examples of systems that deal with the transmission of data of different types, such as videos, music, and podcasts, and that need to offer transparency, traceability, and security features when handling this data in software systems. In the digital entertainment industry, a research gap was identified in the state of the art concerning the application of Blockchain technology to enable transparency in audience services and payments on streaming. This work proposes inserting a Blockchain layer into the existing operational infrastructure of streaming platforms available on the market. This layer manages individual content information and verifies and audits transactions associated with each digital media. Furthermore, it allows the transfer of monetary resources through the Blockchain network. A software architecture was created to achieve the proposed objective containing the components, interfaces, relationships, and constraints of the functional requirements representing this application domain. In this process, an architecture specification model, the C4 model, was adopted due to its understandable documentation characteristic for both technical and non-technical audiences. After specifying the architecture, the fundamental components for testing and operating the solution proposed in this research were implemented. In this way, middleware was developed to capture and manage the audience mechanisms for each piece of content individually. This occurs through an automatic smart contract generation service executed when content is available and consumed on streaming platforms. Subsequently, a web application was developed that simulates the operations of a streaming platform to integrate with the middleware developed, allowing tests to be carried out, both financial and functional. The tests were conducted on three different Blockchain networks, revealing the technical feasibility of the proposed solution. This was achieved through the interception and automatic management of audience and payment information using smart contracts. Furthermore, the financial viability of the implementation was analyzed, resulting in an average cost of US$0.000518 on one of the Blockchain networks. Additionally, the research involved content creators with monetized channels and software developers specializing in Blockchain. The objective of consulting these professionals was to evaluate and obtain feedback on the new software architecture and the feasibility of the proposed solution. Thus, the results included feedback that validated the implementation of the computational solution, confirming the platform's operationalization by content creators on streaming platforms.

4
  • GEAN DA SILVA SANTOS
  • USE OF INFORMATION THEORY MEASURES EXTRACTED FROM OBD-II INTERFACE DATA FOR DRIVER IDENTIFICATION

  • Advisor: ANDRE LUIZ LINS DE AQUINO
  • COMMITTEE MEMBERS:
  • ANDRE LUIZ LINS DE AQUINO
  • FABIANE DA SILVA QUEIROZ
  • RAQUEL DA SILVA CABRAL
  • OSVALDO ANIBAL ROSSO
  • DENIS LIMA DO ROSARIO
  • Date: 25-Mar-2024


  • Abstract
  • Vehicles have more and more built-in sensors. These sensors are interconnected in an internal network called CAN, and their values can be accessed through the OBD-II interface, which makes available a large amount of data on different variables related to the act of driving.
    Several works have proposed applications that benefit from the availability of these data. Most applications fall into one of the following problems: classification, clustering, or prediction. In general, these works follow the same process: data extraction, data cleaning and transformation, model training, and model evaluation.
    In this work, it is proposed to extract measures from Information Theory and add them to this process. The intention is to obtain new features that complement or replace the pre-processed data, and then to evaluate the performance of the resulting models. For evaluation, two applications are analyzed: driver identification (a classification problem) and pollutant gas emission prediction (a prediction problem).
    Preliminary results were obtained for driver identification. An experiment was conducted with a small and a large amount of data. For the small amount of data, the process from the literature proved superior to the proposed process for most classification algorithms. For the large amount of data, the difference between the processes became small, with the literature process remaining slightly superior, except for the Naive Bayes algorithm (which performed better with the proposed process but had lower accuracy than the other classifiers).
    The next step is to compare the proposed process and the one commonly used in the literature on a prediction problem: the prediction of polluting gas emissions.
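
    The abstract does not list the specific Information Theory measures extracted. As one hedged example of such a feature, the sketch below computes the normalized Shannon entropy of a discretized vehicle-speed series; the signals, the speed range and the binning are illustrative assumptions, not the dissertation's pipeline.

        # Illustrative feature: normalized Shannon entropy of one OBD-II channel
        # (e.g., vehicle speed). The series, the assumed 0-120 km/h range and the
        # binning are hypothetical; the dissertation's measures are not reproduced.
        import numpy as np

        def normalized_shannon_entropy(series, n_bins=16, value_range=(0, 120)):
            """Histogram-based Shannon entropy scaled to [0, 1]."""
            counts, _ = np.histogram(series, bins=n_bins, range=value_range)
            probs = counts[counts > 0] / counts.sum()
            return -np.sum(probs * np.log2(probs)) / np.log2(n_bins)

        rng = np.random.default_rng(0)
        steady_driver  = 60 + rng.normal(0, 1.0, size=600)    # nearly constant speed
        erratic_driver = 60 + rng.normal(0, 12.0, size=600)   # frequent speed changes

        print("steady :", round(normalized_shannon_entropy(steady_driver), 3))
        print("erratic:", round(normalized_shannon_entropy(erratic_driver), 3))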

2023
Dissertations
1
  • MARIA JOSÉ DOS SANTOS TAKESHITA
  • The effects of gender-stereotypes gamified environments on self-efficacy, engagement, and students' performance: a quantitative and qualitative analysis

  • Advisor: IG IBERT BITTENCOURT SANTANA PINTO
  • COMMITTEE MEMBERS:
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • IG IBERT BITTENCOURT SANTANA PINTO
  • JARIO JOSE DOS SANTOS JUNIOR
  • LEONARDO BRANDAO MARQUES
  • MARCELO REIS
  • Date: 04-Apr-2023


  • Abstract
  • The use of gamification, which has been widely studied and applied in education, allows teaching to be conducted in an innovative way, introducing new resources into education and helping teachers and students in the learning process. Gamification has been widely applied to increase student engagement and motivation, and is often used online. Despite the clear benefits, studies show that negative effects on engagement can be found, especially when students are exposed to gender-stereotyped gamified environments. A pertinent question then arises: how can the engagement and performance of students in the activities performed be increased by using a Boost-type (i.e., positive) message in a gamified environment? In order to seek evidence to answer this question, a study was conducted to investigate whether gender stereotypes, combined with the use of Boost-type messages in gamified online educational environments, can actually foster students' self-efficacy and performance. The experiment was divided into three stages: (1) a questionnaire about the participants' self-efficacy, followed by (2) a hypothetical gamified online system with logic questions in which they could or could not receive a positive message (Boost) according to their gender; subsequently, (3) their self-efficacy was reassessed to verify whether any changes could be traced. In light of the observed factors, results indicated that female participants had better engagement results when a female Boost-message (stFemale) environment was introduced. Nonetheless, male participants presented significantly higher results in relation to self-efficacy and performance, raising the need for a qualitative analysis of the results regarding female participants' self-efficacy and performance.

2
  • LUCIANO JULIO DOS SANTOS
  • MONITORING VOLTAGE UNBALANCE AND REVERSE CURRENT IN THE LOW-VOLTAGE DISTRIBUTION GRID: A PROPOSAL FOR A COMPUTER SYSTEM USING THE PUBLIC LIGHTING INFRASTRUCTURE

  • Advisor: ERICK DE ANDRADE BARBOZA
  • COMMITTEE MEMBERS:
  • RONALDO RIBEIRO BARBOSA DE AQUINO
  • ANDRE LUIZ LINS DE AQUINO
  • ERICK DE ANDRADE BARBOZA
  • IGOR CAVALCANTE TORRES
  • Date: 23-May-2023


  • Abstract
  • Brazil's and the world's energy matrices are undergoing a process of energy generation decentralization due to the expansion of Distributed Generation (DG), particularly energy generated by Photovoltaic Panels (PP) installed on the roofs of residences in urban areas. This change has made power quality monitoring in Low Voltage Distribution Networks (LVDN) one of the biggest challenges for Distribution System Operators (DSO), who often only become aware of power quality levels through customers' complaints to the distributors. There are few projects implementing systems for this purpose, possibly due to the high investment required for this technology. Therefore, this research proposes a cyber-physical system for monitoring the direction of electric current flow and voltage imbalances in an LVDN using the street lighting infrastructure. The proposed system provides DSOs with real-time information on power quality in the LVDN through a low-cost data acquisition model based on Telemanagement Relays (TR) in the street lighting infrastructure. The system was empirically validated by monitoring a street where photovoltaic generation systems stochastically connected to the LVDN inject power. The results show that the proposed system can identify the direction of the current in the LVDN and estimate the voltage unbalance with an average absolute error of 0.096% compared to a professional power analyzer. These results can make relevant contributions to the challenge of power quality monitoring; for example, DSOs can use the proposed system to access real-time information on the LVDN through a low-cost data acquisition model, thus contributing to a more orderly expansion of distributed energy production in Brazil.

3
  • ANDRE MOABSON DA SILVA RAMOS
  • Code Smells Detection Across Programming Languages

  • Advisor: BALDOINO FONSECA DOS SANTOS NETO
  • COMMITTEE MEMBERS:
  • WESLEY KLEWERTON GUEZ ASSUNÇÃO
  • BALDOINO FONSECA DOS SANTOS NETO
  • MARCIO DE MEDEIROS RIBEIRO
  • RAFAEL MAIANI DE MELLO
  • Date: 30-Jun-2023


  • Abstract
  • During software development, the presence of code smells has been related to the degradation of software quality. Several studies present the importance of detecting smells in the source code and applying refactoring. However, existing approaches for detecting code smells are limited to certain programming languages. In this context, this work aims to extend code smell detection techniques using transfer learning, comparing the results of models built from two neural network architectures. For our study, we selected five programming languages that are among the 10 most used languages according to a survey conducted by StackOverflow in 2021: Java, C#, C++, Python and JavaScript. The results indicated that, when applying transfer learning, the models were able to correctly classify smelly code snippets in other languages, with the exception of the Python model, regardless of the model's architecture. These results can help developers and researchers apply the same code smell detection strategies in different programming languages and use the models and datasets that we make available.
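
    The abstract mentions transfer learning between models trained on different programming languages without detailing the architecture. As a minimal sketch of the general idea only (not the dissertation's models), the snippet below freezes a pretrained encoder in PyTorch and fine-tunes a small classification head on snippets from a new language.

        # Minimal transfer-learning sketch (illustrative only): reuse an encoder assumed
        # to be pretrained on one language's snippets and train a new head for another
        # language. Architecture and tensors are placeholders, not the dissertation's.
        import torch
        import torch.nn as nn

        EMBED_DIM, HIDDEN, N_CLASSES = 128, 64, 2   # e.g., smelly vs. clean

        # Stand-in for an encoder pretrained on, say, Java snippets.
        encoder = nn.Sequential(nn.Linear(EMBED_DIM, HIDDEN), nn.ReLU(),
                                nn.Linear(HIDDEN, HIDDEN))
        for param in encoder.parameters():   # 1) freeze the pretrained weights
            param.requires_grad = False

        head = nn.Linear(HIDDEN, N_CLASSES)  # 2) fresh head for the target language
        model = nn.Sequential(encoder, head)
        optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # Placeholder batch of snippet embeddings and labels for the target language.
        x = torch.randn(32, EMBED_DIM)
        y = torch.randint(0, N_CLASSES, (32,))

        for _ in range(5):                   # tiny fine-tuning loop
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
        print("fine-tuning loss:", float(loss))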

4
  • BRUNO FERREIRA BARBOSA ROCHA

  • A QUALITY MANAGEMENT MODEL FOR PUBLIC SECTOR CROWDSOURCING PROJECTS

  • Advisor: ALAN PEDRO DA SILVA
  • COMMITTEE MEMBERS:
  • ALAN PEDRO DA SILVA
  • RANILSON OSCAR ARAUJO PAIVA
  • FERNANDO SILVIO CAVALCANTE PIMENTEL
  • RAFAEL FERREIRA LEITE DE MELLO
  • Date: 10-Jul-2023


  • Abstract
  • Public participation can be seen as a logical extension of the democratic process, as explained by Brabham [2009]. Within this context arises Crowdsourcing (a neologism of "crowd" and "outsourcing") which, according to Howe [2009], is the act of offering a job (generally performed by a person, an employee or a contracted company) in an open call for the participation of a group of people. Crowdsourcing can play an effective role in public administration within the public service delivery system, in the perceptions of government authorities and citizens, and in the context of innovative and modernized management practices Sumra and Bing [2016]. Crowdsourcing, in short, is a general-purpose problem-solving method, which uses a group of participants willing to help solve a proposed problem Neto and Santos [2018]. Innovation tournaments, prizes for solving an engineering problem, or online payment to participants for categorizing images are examples of crowdsourcing Ranard et al. [2013]. Brabham [2009], in his study on Crowdsourcing project planning, argues that it is a model capable of aggregating talents, leveraging ingenuity and at the same time reducing the costs and time that were previously needed to solve problems, in line with the understanding of Koch et al. [2011], who argue that crowdsourcing and co-creation platforms have changed the way companies implement open innovation.

5
  • ANDRESSA MARTINS OLIVEIRA
  • MOBILE ROBOTIC TELEOPERATION SYSTEM

  • Advisor: ICARO BEZERRA QUEIROZ DE ARAUJO
  • COMMITTEE MEMBERS:
  • ALLAN DE MEDEIROS MARTINS
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • TIAGO FIGUEIREDO VIEIRA
  • Date: 19-Jul-2023


  • Abstract
  • Despite the great advances in autonomous robots, some activities still require human supervision due to several barriers encountered; therefore, the development of telerobotic systems has emerged in several areas of activity. This work proposes the development of a mobile robot teleoperation system capable of acting in several teleoperated robot control architectures, implementing shared and supervisory control based on the ROS framework and its tools for communication between the operator and the machine.

6
  • CRISTÓVÃO DA SILVA RODRIGUES COSTA
  • Micro actions detection from facial videos for cognitive load analysis in multimedia learning environments

  • Advisor: THALES MIRANDA DE ALMEIDA VIEIRA
  • COMMITTEE MEMBERS:
  • BRUNO ALMEIDA PIMENTEL
  • DIEGO CARVALHO DO NASCIMENTO
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • Date: 31-Jul-2023


  • Abstract
  • The technological evolution of recent years, accelerated by the Covid-19 pandemic, has been responsible for a rapid and continuous paradigm shift in education, with the adoption of multimedia learning environments. During the learning process in these environments, the absorption of content by students decreases as the volume of information transmitted to them increases. In other words, there is a cognitive overload in one or both visual and verbal channels. Currently, there is a scarcity of studies that use Computer Vision and Data Science tools for the analysis of Cognitive Load. Tools of this nature would enable automated analysis of large volumes of videos and, consequently, the evaluation and generation of multimedia content that optimize student learning. In this work, a pilot study was conducted with a sample of 13 students from the School of Medicine at the Universidad de Atacama, Chile. Thus, a methodology was developed to extract and investigate correlations between visual characteristics of the students' faces and cognitive load. A database of facial videos of students watching multimedia-enhanced lectures on their computer screens was used. This video database was initially organized and preprocessed, followed by the application of Deep Learning models to extract visual points of interest from the face in each frame. Micro-actions were previously annotated by the researcher, and the resulting data were evaluated to identify relevant patterns related to cognitive load using unsupervised and supervised machine learning algorithms. In addition to addressing the main investigation of this research, the results of this study include a proof of concept for analyzing the correlation of facial expressions with individual exam scores, for further analysis of cognitive load in multimedia learning environments.

7
  • DURVAL PEREIRA CESAR NETO
  • Understanding harmful code through transfer learning

  • Advisor: BALDOINO FONSECA DOS SANTOS NETO
  • COMMITTEE MEMBERS:
  • BALDOINO FONSECA DOS SANTOS NETO
  • LEOPOLDO MOTTA TEIXEIRA
  • MARCELO COSTA OLIVEIRA
  • Date: 28-Aug-2023


  • Abstract
  • Code smells are indicators of poor design and implementation decisions that can potentially harm the quality of software. Therefore, detecting these smells is crucial to prevent such issues. Some studies aim to comprehend the impact of code smells on software quality, while others propose rules or machine learning-based approaches to identify code smells. Previous research has focused on labeling and analyzing code snippets that significantly impair software quality using machine learning techniques. These snippets are classified as Clean, Smelly, Buggy, and Harmful Code. Harmful Code refers to Smelly code segments that have one or more reported bugs, whether fixed or not. Consequently, the presence of Harmful Code increases the risk of introducing new defects and/or design issues during the remediation process. While generating useful results for harmful code detection, none of the prior work has considered, through the use of transfer learning, training a model to identify harmful snippets in one programming language and using it to identify similar harmfulness in another programming language. We perform our study on this scope with five smell types, 803 258,035 versions of 23 open-source projects, 8,181 bugs, and 11,506 code smells. The findings revealed promising transferability of knowledge between Java and C# in the presence of various code smell types, while C++ exhibited more challenging transferability. Also, our study discovered that a sample size of 32 demonstrated favorable outcomes for most harmful codes, underscoring the efficiency of transfer learning even with limited data.

8
  • WELLIGNTON BATISTA DA SILVA
  • RECOMMENDATION OF A MACHINE LEARNING MODEL FOR PREDICTING CARDIOVASCULAR RISK WITH METABOLIC SYNDROME BIOMARKERS AND FRAMINGHAM SCORE

  • Advisor: RAFAEL DE AMORIM SILVA
  • COMMITTEE MEMBERS:
  • ALMIR PEREIRA GUIMARAES
  • BRUNO ALMEIDA PIMENTEL
  • RAFAEL DE AMORIM SILVA
  • RAFAEL FERREIRA LEITE DE MELLO
  • RANILSON OSCAR ARAUJO PAIVA
  • Date: 30-Aug-2023


  • Abstract
  • The prediction of cardiovascular events in patients diagnosed with Metabolic Syndrome (MS)
    is a topic of great relevance to the field of Health in general and essential for Endocrinology.
    This dissertation aims to recommend a Machine Learning (ML) model for estimating the risks
    of cardiovascular events in patients with MS, exploring the markers of the Framingham Risk
    Score (FRS) and MS. Methodologically, we used a logistic regression model and analyses with
    decision tree, random forest, gradient boosting, support vector machine, and k-nearest neighbors
    to test our central hypothesis that biomarkers (variables related to MS) have a positive, strong,
    and significant impact on cardiovascular events in patients with MS. Technically, the research
    was conducted through experiments performed in different scenarios. In the first scenario, an
    algorithm was developed to assess cardiovascular risk in patients with and without MS. In
    subsequent scenarios, patients with and without MS were analyzed, considering the markers of
    MS and FRS as dependent variables, while the Metabolic Syndrome condition was adopted as an
    independent variable. In the fifth scenario, an analysis was performed to select the most suitable
    regression and classification model for predicting cardiovascular risk in a combined dataset of
    heart diseases. In the sixth scenario, the developed model was based on the Framingham Global
    Risk Score (FRS), incorporating the markers of MS into the experiments. Data were obtained
    from the National Center for Health Statistics (NHANES) repository, a combined dataset of heart
    diseases from the UCI Machine Learning Repository and the Kaggle platform. In summary, the
    main findings of this dissertation are as follows: (1) In the first scenario, a percentage difference
    of 81.74% was observed in the mean CVD risk between populations with and without Metabolic
    Syndrome, showing a significant increase in cardiovascular risk in the population with MS; (2)
    In subsequent scenarios, the Random Forest (RF) model stood out, achieving high accuracy
    in all variable combinations, especially when combining the markers of MS with the gender
    marker; (3) In the fifth scenario, the RF model was identified as the most suitable, highlighting
    the importance of variables related to MS in predicting cardiovascular risk and emphasizing the
    need for improvements in models for better identification of positive cases; (4) Both the model
    with three MS markers and the model with five MS markers combined with the Framingham
    Risk Score (MS + FRS) showed considerable performance, with significant correlations and
    accuracies (0.80 and 0.84, respectively). These simpler combinations of variables can be an
    interesting approach as they provide relevant information for cardiovascular risk prediction in a
    less invasive manner, avoiding the need for more complex tests.
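
    The dissertation compares logistic regression, decision tree, random forest, gradient boosting, SVM and k-NN models. The snippet below is only a generic sketch of such a comparison with scikit-learn on synthetic data; the NHANES/UCI/Kaggle datasets and the actual feature sets are not reproduced.

        # Generic model-comparison sketch: the estimators mirror those named in the
        # abstract, but the data here are synthetic, not the study's datasets.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        X, y = make_classification(n_samples=500, n_features=8, random_state=42)

        models = {
            "logistic regression": LogisticRegression(max_iter=1000),
            "decision tree":       DecisionTreeClassifier(random_state=42),
            "random forest":       RandomForestClassifier(random_state=42),
            "gradient boosting":   GradientBoostingClassifier(random_state=42),
            "SVM":                 SVC(),
            "k-NN":                KNeighborsClassifier(),
        }

        for name, model in models.items():
            scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
            print(f"{name:20s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")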

9
  • JULIANO ROCHA BARBOSA
  • Evaluating the Resilience of Cloud NLP Services across Amazon, Microsoft, and Google

  • Advisor: BALDOINO FONSECA DOS SANTOS NETO
  • COMMITTEE MEMBERS:
  • BALDOINO FONSECA DOS SANTOS NETO
  • LEOPOLDO MOTTA TEIXEIRA
  • MARCELO COSTA OLIVEIRA
  • MARCIO DE MEDEIROS RIBEIRO
  • Date: 30-Aug-2023


  • Abstract
  • Natural Language Processing (NLP) has revolutionized industries, streamlining customer service through applications in healthcare, finance, legal, and human resources domains, and simplifying tasks like medical research, financial analysis, and sentiment analysis. To avoid the high costs of building and maintaining NLP infrastructure, companies turn to Cloud NLP services offered by major cloud providers like Amazon, Google, and Microsoft. However, there is little knowledge about how resilient these services are when subjected to noise. This paper presents a study that analyzes the resilience of Cloud NLP services by evaluating the effectiveness of sentiment analysis services provided by Amazon, Google, and Microsoft when subjected to 12 types of noise, including syntactic and semantic noises. The findings indicate that Google is the most resilient to syntactic noises, and Microsoft is the most resilient to semantic noises. These findings may help developers and companies in selecting the most suitable service provider and shed light towards improving state-of-the-art techniques for effective cloud NLP services.

10
  • MARIANA SILVA GOIS DE ALMEIDA
  • The use of artificial intelligence to create biometric formulas for the calculation of the intraocular lens.

  • Advisor: AYDANO PAMPONET MACHADO
  • COMMITTEE MEMBERS:
  • AYDANO PAMPONET MACHADO
  • EVANDRO DE BARROS COSTA
  • EDILEUZA VIRGINIO LEÃO
  • JOÃO MARCELO DE ALMEIDA GUSMÃO LYRA
  • RAFAEL FERREIRA LEITE DE MELLO
  • Date: 31-Aug-2023


  • Abstract
  • Purpose: To develop a new biometric formula based on a database of the Brazilian population and to compare its performance with that of 6 other formulas existing in the scientific community. Location: Participants were evaluated and underwent surgery at an ophthalmological hospital in Brasília, DF, Brazil. Design: A retrospective clinical study to compare the performance of biometric formulas. Methodology: The performances of machine learning models and of the Barrett Universal II, Haigis, Hoffer Q, Holladay 1, Kane and SRK/T formulas were analyzed by comparing the median absolute error and the percentage of eyes with less than 0.5 and 1.0 D of absolute error. Results: 1526 single eyes were analyzed, where the Barrett Universal II, Haigis, Hoffer Q, Holladay 1, Kane and SRK/T formulas and the MLP and SVR models obtained, respectively, the following median absolute error values: 0.393, 0.426, 0.427, 0.387, 0.409, 0.410, 0.405 and 0.370 D. The performances of the SVR model and the Holladay 1 and Barrett Universal II formulas were also superior in all studied subgroups. Conclusion: The proposed computational model-based formula obtained superior results compared to the other formulas for calculating intraocular lens power in this population and in all studied biometric subgroups.
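
    For readers unfamiliar with the metrics quoted above, the sketch below shows how a median absolute error and the percentage of eyes within 0.5 D and 1.0 D can be computed from predicted and achieved refractions; the values are made up and are not the study's data.

        # How the reported metrics can be computed (values are illustrative only).
        import numpy as np

        predicted_refraction = np.array([-0.25, 0.10, -0.50, 0.75, 0.00, -1.10])  # D, hypothetical
        achieved_refraction  = np.array([-0.10, 0.55, -0.40, 0.20, 0.90, -0.80])  # D, hypothetical

        abs_error = np.abs(achieved_refraction - predicted_refraction)
        print("median absolute error (D):", np.median(abs_error))
        print("eyes within 0.5 D:", f"{np.mean(abs_error <= 0.5):.0%}")
        print("eyes within 1.0 D:", f"{np.mean(abs_error <= 1.0):.0%}")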

11
  • CLEBERSON DOS SANTOS MACHADO
  • Optimization of the Operational Mode of Pulsatile Ventricular Assist Devices

  • Advisor: THIAGO DAMASCENO CORDEIRO
  • COMMITTEE MEMBERS:
  • THIAGO DAMASCENO CORDEIRO
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • Lenardo Chaves e Silva
  • Date: 05-Sep-2023


  • Abstract
  • Cardiovascular diseases represent a large share of the causes of death in Brazil. In severe cases, mechanical pumps called ventricular assist devices (VADs), designed to extend patients' life expectancy, can save lives. This work proposes using computer simulation to evaluate the performance of a coupled model of the human cardiovascular system and a pulsatile VAD, in order to find the optimal combination of ejection pressure and ejection instant that improves the patients' physiological response. As a starting point, techniques available in the literature, albeit designed for rotary devices, were analysed. Finally, the computer simulation results were analysed, and it was concluded that it is possible to choose an optimal combination that meets all the physiological demands of the main hemodynamic variables.

12
  • ELISANGELA MARTINS DO NASCIMENTO
  • Investigating the relationship between Grit, Learning, Flow Experience in Stereotyped Gamified Educational Environments

  • Advisor: IG IBERT BITTENCOURT SANTANA PINTO
  • COMMITTEE MEMBERS:
  • IG IBERT BITTENCOURT SANTANA PINTO
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • SHEYLA CHRISTINE SANTOS FERNANDES
  • GEISER CHALCO CHALLCO
  • MARCELO REIS
  • Date: 13-Oct-2023


  • Abstract
  • This dissertation is composed of three studies that aim to explore the influence of gamification,
    which refers to the integration of game elements and mechanics in non-game contexts, such as
    educational environments, on student performance and engagement and the impact of gender
    stereotypes, correlating them with the Grit personality trait. This trait is characterized by a
    combination of passion and perseverance towards long-term goals, which encompasses qualities
    such as resilience, determination and the ability to maintain effort and interest in the face of
    challenges and setbacks. Given this, three studies were conducted, the first study is a systematic
    review of the literature that investigates the correlation between Grit and gamified systems,
    suggesting that gamification positively influences student engagement and performance. The
    second study focuses on gender differences in stereotypical gamified tutoring systems, suggesting
    that Grit alone may not be enough to reduce these disparities, indicating in the results that
    gender stereotypes can negatively affect performance and engagement for women with low levels
    of Grit. The third study is a qualitative analysis that explores participants’ emotional perceptions
    in a stereotyped gamified environment, identifying negative feelings of lack of control. Overall,
    the findings highlight the need for equitable and carefully designed gamified environments to
    avoid negative effects on student performance and engagement. Future research should consider
    a holistic approach, investigating complex correlations between personality traits, engagement
    methods and academic performance, and the impact of gender stereotyping. Future studies
    should also guide the development of teaching and learning methodologies.

13
  • JOÃO VITOR LOURENÇO BATISTA DO NASCIMENTO
  • Stereotypes in Gamified Educational Environments: An analysis of gender and race

  • Advisor: IG IBERT BITTENCOURT SANTANA PINTO
  • COMMITTEE MEMBERS:
  • IG IBERT BITTENCOURT SANTANA PINTO
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • IVANDERSON PEREIRA DA SILVA
  • JARIO JOSE DOS SANTOS JUNIOR
  • FLÁVIA MARIA SANTORO
  • Date: 30-Oct-2023


  • Abstract
  • Due to the COVID-19 pandemic, educational technologies are becoming increasingly common in learning environments, especially in the context of undergraduate education. Several authors discuss problems that are strongly related to digital educational technologies, with some of them not directly related to computer science, such as stereotypes, which are multidisciplinary issues to be addressed in interface with social psychology. The problem of stereotyped digital educational technologies lies in the possibility that thousands of students may have their academic performance negatively affected by racial and gender stereotypes. This dissertation work discusses the influence of racial and gender stereotypes in gamified online educational environments used by undergraduate students. The overall objective of the research is to understand the effects of these stereotypes present in the development of a gamified educational environment, analyzing three psychological constructs that affect performance and learning: Flow, Self-handicapping, and Anxiety. To achieve this goal, we will conduct experiments. As a result of this research, we find that, when examining the influence of gender stereotypes, they negatively impact the levels of flow and performance of women when in gender-opposite environments. This influence did not affect men significantly, as they continued to have higher ratings despite being in a stereotype-threat condition. On the other hand, when examining the influences of racial stereotypes on the levels of flow, anxiety, and performance of undergraduate students, we found that there was indeed an increase in anxiety and flow, and that there were differences in the performance of students, especially between white male nonaffirmative action students and black female affirmative action students, but these differences were not statistically significant between the groups. We also infer from the second study that there is an influence of psychological mediators on student performance. Inspired by the results of these studies, an article proposing a methodology called the kaleidoscopic perspective was produced, which presents a proposal to produce technology designs that consider the social nuances inherent to the target subjects of the solutions. The relevance of the research lies in the possibility of contributing to the areas of computer science in education, the development of digital educational technologies, and psychology in understanding the influence of stereotypes. Furthermore, this work contributes to thinking about the development of educational technologies from a perspective that takes into account the intersectionality inherent to their users, making them more effective by considering the social context in which they are embedded.

14
  • ITALO RODRIGO DA SILVA ARRUDA
  • Cognitive Load in Sending Feedback from Dashboards and Learning Analytics: a controlled experiment with teachers

  • Advisor: DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • COMMITTEE MEMBERS:
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • RAFAEL DE AMORIM SILVA
  • RANILSON OSCAR ARAUJO PAIVA
  • HELENA MACEDO REIS
  • Date: 22-Nov-2023


  • Abstract
  • The use of Information Technologies is increasingly evident in educational environments. Online educational platforms make it possible to help students and teachers in the formative mission of teaching and learning in various areas of knowledge. Among the educational platforms focused on e-learning are Massive Open Online Courses (MOOCs), which allow an adaptive educational environment for users with the support of automated recommendation resources, implemented through artificial intelligence techniques, with the objective of supporting students' learning according to their usage profiles. In this sense, researchers have been increasingly interested in providing teachers with strategies to accompany students more efficiently when sending feedback in the context of MOOCs, in order to use the inherently human capacities of teachers to adjust the feedback according to the needs of students, taking into account sentiment analysis of their posts. This work focuses on investigating teachers' perception of their cognitive effort and of the time spent creating and adapting feedback recommendations in a simulated educational platform. The study compares three groups of teachers, each using one of three scenarios (manual, automated and semi-automated) for monitoring a simulated educational platform. The scenarios are used in a randomized experiment, in which the participating teachers evaluate, through a form, their perception of the cognitive effort and time dedicated to the creation and adequacy of feedback recommendations and to the monitoring of students on the platform, in addition to evaluating the Technology Acceptance Model (TAM) of the proposed scenarios as a Learning Analytics module for MOOC environments. The results are expected to validate the hypotheses raised: that the use of artificial intelligence in the automated scenario influences the perception of teachers, leading them to report a lower workload than in the manual scenario. When evaluating teachers according to the level of sentiment of students' posts, we look for indications that the perception of mental demand is lower in the automated scenario than in the manual scenario. These are important contributions to understanding the perception of teachers when using educational platforms in their classes.

15
  • FLAVIO YURI AQUINO DE OLIVEIRA
  • A COMPUTATIONAL MODEL FOR RECOMMENDING PERSONALIZED THERAPEUTIC TREATMENT FOR PATIENTS WITH SEPTIC CONDITIONS

  • Advisor: RAFAEL DE AMORIM SILVA
  • COMMITTEE MEMBERS:
  • RAFAEL DE AMORIM SILVA
  • BRUNO ALMEIDA PIMENTEL
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • ALMIR PEREIRA GUIMARAES
  • FILIPE MONTEZ COELHO MADEIRA
  • Date: 21-Dec-2023


  • Abstract
  • Health Informatics provides resources for the generation, maintenance, and storage of necessary
    patient data through medical records built from information systems. This study addresses a gap
    identified in the literature regarding technical support for physicians, specifically the automatic,
    Machine Learning-assisted provision of medication suggestions for patients with septic conditions.
    Within the health area, sepsis is a clinical condition for which the response time is vital for the
    patient: if not treated efficiently and effectively, the patient may die in less than 24 hours. In this
    context, the main idea of this work is to provide a Machine Learning model that automatically predicts
    therapeutics, personalized for the patient, to assist the doctor in their decision-making. Very
    promising results were found in the models tested for the suggestion task, although it was not yet
    possible to validate their efficiency and effectiveness by applying them to real patients. We see this
    as a potentially promising path that can generate good results, mainly by helping to mitigate deaths
    resulting from sepsis.

2022
Dissertations
1
  • LUIS FELIPE VIEIRA SILVA
  • A Variable Gain Physiological Controller for Rotary Ventricular Assist Devices

  • Advisor: THIAGO DAMASCENO CORDEIRO
  • COMMITTEE MEMBERS:
  • ANTONIO MARCUS NOGUEIRA LIMA
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • THIAGO DAMASCENO CORDEIRO
  • Date: 11-Feb-2022


  • Abstract
  • This work involves designing a physiological adaptive control law for a turbodynamic ventricular assist device (TVAD) using a lumped parameter time-varying model that describes the cardiovascular system. The TVAD is a rotary blood pump driven by an electrical motor. The system simulation also includes the adaptive feedback controller, which provides a physiologically correct cardiac output under different preload and afterload conditions. The cardiac output is estimated at each heartbeat, and the control objective is achieved by dynamically changing the motor speed controller's reference based on the systolic pressure error. TVADs provide support for blood circulation in patients with heart failure. Several control strategies have been developed over the years, emphasizing the physiological ones, which adapt their parameters to improve the patient's condition. In this work, a new strategy is proposed using a variable gain physiological controller to keep the cardiac output at a reference value under changes in both preload and afterload. Computational models are used to evaluate the performance of this control technique, which has shown better adaptability results than constant speed controllers and constant gain controllers.
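
    The abstract describes updating the speed reference from the systolic pressure error at each heartbeat. The loop below is only a schematic, assumed proportional update on made-up numbers; it is not the controller, the gains, or the cardiovascular model used in the dissertation.

        # Schematic sketch only: adapting a pump-speed reference from the systolic
        # pressure error once per heartbeat. The gain, target and toy "plant" are
        # assumptions for illustration, not the dissertation's models.
        target_systolic = 120.0   # mmHg, assumed reference
        speed_ref = 2500.0        # rpm, initial speed reference
        gain = 5.0                # rpm per mmHg of error (illustrative)

        def measured_systolic(speed):
            """Toy stand-in for the coupled cardiovascular + pump response."""
            return 80.0 + 0.015 * speed

        for beat in range(1, 6):
            error = target_systolic - measured_systolic(speed_ref)
            speed_ref += gain * error      # per-beat reference update
            print(f"beat {beat}: systolic = {measured_systolic(speed_ref):6.1f} mmHg, "
                  f"speed reference = {speed_ref:7.1f} rpm")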

2
  • EDUARDO MORAES DE MIRANDA VASCONCELLOS
  • Classifying Heartbeats from Electrocardiogram signals using a Siamese Convolutional Neural Network

  • Advisor: THIAGO DAMASCENO CORDEIRO
  • COMMITTEE MEMBERS:
  • FILIPE ROLIM CORDEIRO
  • BALDOINO FONSECA DOS SANTOS NETO
  • THIAGO DAMASCENO CORDEIRO
  • Date: 24-Feb-2022


  • Abstract
  • The Electrocardiogram (ECG) is a low-cost exam commonly used to diagnose abnormalities in the cardiac cycle, such as arrhythmias and problems in the heart's muscle. With the advance of machine learning (ML) techniques in recent years, the automatic classification of ECG signals has garnered interest in the scientific community. However, the process of annotating large and diverse datasets to support the training of ML techniques is still very time-consuming and error-prone. Thus, ML techniques whose training does not require large, well-annotated datasets are becoming even more prominent. This means that underrepresented data in ECG datasets, such as rare cardiologic disturbances, can still be properly identified and classified. In this work, the use of Siamese Convolutional Neural Networks, popular in image classification problems, to classify 12-lead ECG heartbeats is investigated. The early results indicate an accuracy of up to 95% on a public dataset using models composed of different combinations of similarity and loss functions. The class-by-class classification results are also compared with those of similar methods found in the literature, obtaining metrics on par with, and even exceeding, them in the classification of some classes.
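
    The abstract does not give the network details. As a hedged, minimal PyTorch sketch of the Siamese idea it describes, the code below uses a shared 1-D convolutional encoder that compares pairs of heartbeats through an embedding distance and a contrastive loss; shapes and data are placeholders, not the dissertation's architecture or dataset.

        # Minimal Siamese sketch (illustrative): a shared 1-D CNN encoder and a
        # contrastive loss over pairs of fake 12-lead heartbeats.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class Encoder(nn.Module):
            def __init__(self, n_leads=12, embed_dim=32):
                super().__init__()
                self.conv = nn.Sequential(
                    nn.Conv1d(n_leads, 16, kernel_size=7, padding=3), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),        # collapse the time axis
                )
                self.fc = nn.Linear(16, embed_dim)

            def forward(self, x):                   # x: (batch, leads, samples)
                return self.fc(self.conv(x).squeeze(-1))

        def contrastive_loss(z1, z2, same_class, margin=1.0):
            """Pull same-class pairs together, push different-class pairs beyond a margin."""
            dist = F.pairwise_distance(z1, z2)
            return torch.mean(same_class * dist.pow(2) +
                              (1 - same_class) * F.relu(margin - dist).pow(2))

        encoder = Encoder()
        beat_a = torch.randn(8, 12, 250)            # 8 pairs of placeholder beats
        beat_b = torch.randn(8, 12, 250)
        same = torch.randint(0, 2, (8,)).float()    # 1 = same class, 0 = different

        loss = contrastive_loss(encoder(beat_a), encoder(beat_b), same)
        loss.backward()
        print("contrastive loss:", float(loss))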

3
  • GLAUBER RODRIGUES LEITE
  • Framework for persistent multiagent ocean monitoring

  • Advisor: HEITOR JUDISS SAVINO
  • COMMITTEE MEMBERS:
  • ALLAN DE MEDEIROS MARTINS
  • HEITOR JUDISS SAVINO
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • THIAGO DAMASCENO CORDEIRO
  • Date: 28-Feb-2022


  • Abstract
  • Efficiently monitoring the ocean is essential to plan strategies that enable the health of the coast to be maintained. This is evident in environmental disaster scenarios, such as the oil spill reported in 2019 over a large stretch of the northeastern Brazilian coast, bringing environmental and socioeconomic consequences to the affected locations. The event showed the need to expand the national monitoring network, composed mainly of marine buoys, which are static or passive components in this task. This work proposes a framework for a persistent monitoring system with active sensing from autonomous collaborative vehicles involved in a maritime mission. This system manages simulations of a dispersion process and synchronizes agents on a mission, working on a patrol policy.

4
  • ARTHUR MONTEIRO ALVES MELO
  • Ambience data analysis in Intensive Care Units

  • Advisor: ANDRE LUIZ LINS DE AQUINO
  • COMMITTEE MEMBERS:
  • ANDRE LUIZ LINS DE AQUINO
  • BRUNO COSTA E SILVA NOGUEIRA
  • HEITOR SOARES RAMOS FILHO
  • RIAN GABRIEL SANTOS PINHEIRO
  • Date: 11-Mar-2022


  • Abstract
  • The Intensive Care Unit is an essentially artificial environment, as it concentrates critically ill patients, highly specialized professionals and state-of-the-art equipment for continuous diagnosis, treatment and monitoring. Its operating dynamics can result in an unwelcoming environment, affecting the health, well-being and performance of its occupants.
    Environmental variables are often related to various health problems, such as: Exposure to high levels of noise for long periods can result in increased blood pressure and heart rate, in addition to hearing disorders; Inadequate lighting can cause eyestrain, headache, sleep disturbances and irritation; The ambient temperature associated with relative humidity can cause dryness of the skin, eyes and throat;
    Temperature and humidity can influence the proliferation of fungi, mites, viruses and bacteria.
    Excessive noise and inadequate lighting are related to sleep disturbance in patients admitted to Intensive Care Units. This can disrupt the circadian cycle and, ultimately, potentiate the emergence of delirium: a severe state of mental confusion that makes it difficult for the patient to recover, interferes with the prognosis and can leave cognitive sequelae.
    For professionals, the environment can be an amplifier of stress and fatigue levels, contributing to the development of symptoms of Burnout syndrome. Impaired performance can lead to an increase in the error rate and culminate in poorer quality of care and impaired patient safety.
    In addition, the lack of control over the temperature and humidity of the air can affect from the operation of equipment to the quality of inputs.
    Variables such as noise, illuminance, temperature and humidity are rarely monitored together and continuously. Without this information, hospitals hardly have indicators and effective protocols to help create a productive, welcoming and humane space – the core of the current health management model.
    Therefore, this work presents an embedded system for monitoring environmental variables, collecting and evaluating data captured in Adult Intensive Care Units (ICU) and neonatal (NICU). This data consists of audio recordings and environmental variables (noise, brightness, temperature and humidity).
    For the evaluation of the data, in addition to conventional metrics, Shannon's classical entropy, the EGCI (Ecoacoustic Global Complexity Index) and DML (distance metric learning) were used.
    Monitoring took place in two different NICUs and in an Adult ICU, which at the time was focused on treating patients with Covid-19.

5
  • MARCUS VINÍCIUS LIMA SANTOS
  • Automatic detection of the corneal epithelial layer from Scheimpflug images

  • Advisor: AYDANO PAMPONET MACHADO
  • COMMITTEE MEMBERS:
  • AYDANO PAMPONET MACHADO
  • EDILEUZA VIRGINIO LEÃO
  • JOÃO MARCELO DE ALMEIDA GUSMÃO LYRA
  • MARCELO COSTA OLIVEIRA
  • Date: 30-May-2022


  • Abstract
  • Objective: Automatic epithelium identification in Scheimpflug camera images.

    Methods: A dataset of 279 normal cornea exams, obtained through captures performed by a Scheimpflug camera, was analyzed. The proposed method has 4 steps: image acquisition; epithelium identification; separation of the curvatures of interest; and evaluation and analysis of the epithelium.

    Results: The Canny, Zerocross and Log algorithms were able to detect the epithelium in the total measure, and the lowest averages found in both thicknesses were with the log and zerocross methods with their variations. These measurements provided 79.74 μm, 79.85 μm, and 80.38 μm in the normal thickness and 65.91 μm, 66.08 μm, and 67.25 μm in thickness by Euclidean distance. However, the zerocross had the lowest number of defective images, and log had more than 50% of the base images with problems. In the central measure, the lowest averages found in both thicknesses were also with the log and zerocross methods with their variations, which had 75.50 μm, 75.58 μm and 75.75 μm in the normal thickness and 61.61 μm, 61.70 μm and 62.04 μm in the thickness by the Euclidean distance.

    Conclusion: This work shows that with the images of the Scheimpflug camera and applying classic edge detection methods we were able to detect the epithelium in normal corneas. This shows that we were able to collect important information from the corneal layers and that this parameter can be added to any equipment that uses the Scheimpflug camera.
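
    The abstract names the Canny, zero-cross and LoG detectors. Purely to illustrate one of them, the sketch below runs scikit-image's Canny detector on a synthetic bright band; it is not the study's pipeline or its Scheimpflug data.

        # Classic edge detection with the Canny detector (scikit-image), on a synthetic
        # image that loosely mimics a bright corneal layer over a dark background.
        import numpy as np
        from skimage import feature, filters

        image = np.zeros((120, 200))
        image[50:60, :] = 1.0                       # synthetic bright band
        image = filters.gaussian(image, sigma=2)    # soften the transitions

        edges = feature.canny(image, sigma=1.5)     # boolean edge map
        rows = np.where(edges.any(axis=1))[0]
        print("edge rows detected from", rows.min(), "to", rows.max())
        print("approximate band thickness (pixels):", rows.max() - rows.min())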

6
  • JOSÉ IRINEU FERREIRA JÚNIOR
  • A Simulation-Based Approach for the Validation of Biomedical Signal Acquisition Systems

  • Advisor: ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • COMMITTEE MEMBERS:
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • LEANDRO DIAS DA SILVA
  • THIAGO DAMASCENO CORDEIRO
  • ANGELO PERKUSICH
  • ANTONIO MARCUS NOGUEIRA LIMA
  • Date: 29-Jun-2022


  • Abstract
  • Biomedical signal acquisition systems are relevant for patient monitoring and diagnosis. Therefore, such systems must comply with current regulatory standards and undergo periodic maintenance, in accordance with the requirements defined in normative resolutions of the Ministry of Health and the National Health Surveillance Agency. Thus, the objective of this work is to define a reliable simulation-based approach to assist manufacturers, validation agencies, and healthcare facilities (in a maintenance context) in the validation of multiple pieces of equipment or biomedical signal acquisition systems. The proposed approach consists of two parts that complement each other: software, represented by an application on a computing device (e.g., a notebook); and hardware, represented by a Biomedical Signal Transceiver (BST) device. To validate the developed software and BST device, digitally generated characteristic signals and real biomedical signals from human beings obtained from the PhysioNet platform are considered. In addition, comparative tests are performed with two properly certified and approved commercial ECG systems.

7
  • TÁSSIO FERNANDES COSTA
  • User-Oriented Natural Human-Robot Control with Thin-Plate Splines and LRCNN

  • Advisor: ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • COMMITTEE MEMBERS:
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • LEANDRO DIAS DA SILVA
  • LEONARDO MELO DE MEDEIROS
  • ANGELO PERKUSICH
  • Lenardo Chaves e Silva
  • Date: 29-Jun-2022


  • Abstract
  • Safety and effectiveness are crucial quality attributes for insulin
    infusion pump systems. Therefore, regulatory agencies require
    the quality evaluation and approval of such systems before the
    market to decrease the risk of harm, motivating the usage of a
    formal Model-Based Approach (MBA) to improve quality.
    Nevertheless, using a formal MBA increases costs and
    development time because it requires expert knowledge and
    thorough analyses of behaviors. We aim to assist the quality
    evaluation of such systems in a cost-effective and time-efficient
    manner, providing re-usable project artifacts by applying our
    proposed approach (named MBA with CPN - MBA/CPN). We
    defined a Coloured Petri nets MBA and a case study on a
    commercial insulin infusion pump system to verify and validate a
    reference model (as a component of MBA/CPN), describing
    quality assessment scenarios. We also conducted an empirical
    evaluation to verify the productivity and reusability of modelers
    when using the reference model. Such a model is relevant to
    reason about behaviors and quality evaluation of such
    concurrent and complex systems. During the empirical
    evaluation, using the reference model, 66.7% of the 12
    interviewed modelers stated no effort, while 8.3% stated low
    effort, 16.7% medium effort, and 8.3% considerable effort.
    Based on the modelers' knowledge, we implemented a web-based
    application to assist them in re-using our proposed
    approach, enabling simulation-based training. Although a
    reduced number of modelers experimented with our approach,
    such an evaluation provided insights to improve the MBA/CPN.
    Given the empirical evaluation and the case study results,
    MBA/CPN showed to be relevant to assess the quality of insulin
    infusion pump systems.

8
  • GIANCARLO LIMA TORRES
  • A DATA SCIENCE-BASED SOCIOECONOMIC PRICE ANALYSIS FOR TRANSPORT TRAVEL BY UBER APP

  • Advisor: BRUNO ALMEIDA PIMENTEL
  • COMMITTEE MEMBERS:
  • BRUNO ALMEIDA PIMENTEL
  • EVANDRO DE BARROS COSTA
  • RAFAEL DE AMORIM SILVA
  • DIEGO CARVALHO DO NASCIMENTO
  • Date: 17-Aug-2022


  • Abstract
  • Studies using data from the transport company Uber showed that time and distance are related to the pricing process of its travel service, making it possible to improve supply and demand strategies. However, this process may involve other factors that contribute to the design of these prices. This research aims to analyze the travel routes of low-income users and contribute to reducing the prices of these trips in transport by the Uber app. For this, we seek to answer: if a financial center were closer to economically poorer neighborhoods, would there be a change in average prices? Could this change financially improve the lives of low-income people? We observed that, for a given region, if the financial center is not concentrated in high-income neighborhoods, it would be possible to reduce travel prices by about 43.07% for users in low-income neighborhoods. This reduction would represent a monthly saving of around 18.82% of their income. For users living in wealthy (high-income) neighborhoods, this decentralization would increase travel costs by just over 100%. However, this increase would represent 6.71% of their income. For two regions, an increase in the average price of these trips was evidenced, confirming a trend of price increase when the destination of a trip is a financial center. Because of this, we proposed a new functionality in the Uber travel service to give more freedom to the user. This functionality would be the choice of a trip using a price bid, in which the application returns the best distances based on the offered price. We created distance prediction models to achieve this objective, using regressor algorithms, among which the Random Forest model presented an average coefficient of determination of 94%.

9
  • FLÁVIO OSCAR HAHN
  • Framework based on optimization models for personal scheduling during pandemic events

  • Advisor: RIAN GABRIEL SANTOS PINHEIRO
  • COMMITTEE MEMBERS:
  • ANAND SUBRAMANIAN
  • BRUNO COSTA E SILVA NOGUEIRA
  • ERICK DE ANDRADE BARBOZA
  • RIAN GABRIEL SANTOS PINHEIRO
  • Date: 23-Aug-2022


  • Abstract

  • In recent years, companies were forced to adapt to new guidelines and strategies to prevent and reduce the transmission of COVID-19 in the workplace.
    One of the main challenges in this adaptation is to effectively manage the workday schedule in order to reduce social contact.
    This work presents a comprehensive optimization framework for automatically planning employee (staff) schedules during pandemic events.
    Our framework uses integer linear programming for defining a set of general constraints that can be used to represent several types of distancing restrictions and different objective functions.
    To use the framework, a company must simply instantiate a subset of these constraints with an objective function (according to its priorities).
    We tested our scheduling framework in three different real-life companies, and the results show that our approach is able to improve the number of in-person workers by 15% while respecting the companies' social distancing restrictions.
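
    The framework itself is built on general integer linear programming constraints. The toy model below, written with the PuLP library on made-up staff and capacity numbers, only illustrates the kind of formulation described, maximizing in-person attendance under a per-day occupancy limit; it is not the dissertation's constraint set.

        # Toy staff-scheduling ILP (illustrative, using PuLP): maximize in-person
        # attendance under an assumed daily occupancy limit. Staff, days and the
        # capacity are made up; the framework's general constraints are not shown.
        import pulp

        employees = ["Ana", "Bruno", "Carla", "Davi"]
        days = ["Mon", "Tue", "Wed", "Thu", "Fri"]
        max_per_day = 2    # assumed distancing limit on simultaneous in-person workers

        prob = pulp.LpProblem("pandemic_schedule", pulp.LpMaximize)
        x = pulp.LpVariable.dicts("present", (employees, days), cat="Binary")

        # Objective: total number of in-person employee-days.
        prob += pulp.lpSum(x[e][d] for e in employees for d in days)

        # Distancing: at most `max_per_day` people in the office on any day.
        for d in days:
            prob += pulp.lpSum(x[e][d] for e in employees) <= max_per_day

        # Example fairness constraint: everyone gets at least one in-person day.
        for e in employees:
            prob += pulp.lpSum(x[e][d] for d in days) >= 1

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        for d in days:
            print(d, [e for e in employees if x[e][d].value() == 1])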

10
  • MARCOS VINÍCIUS SILVA BENTO
  • Proposal and Evaluation of a Prognosis Model for Patients with Sepsis

  • Advisor: RAFAEL DE AMORIM SILVA
  • COMMITTEE MEMBERS:
  • RAFAEL DE AMORIM SILVA
  • BRUNO ALMEIDA PIMENTEL
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • ALMIR PEREIRA GUIMARAES
  • Date: 26-Aug-2022


  • Abstract
  • Septicemia, or sepsis, is an infectious condition with a high mortality rate, mainly affecting people with relatively low immunity, such as newborns, pregnant women and the elderly. It can develop from contamination by contact, by air or in hospital settings, and its high mortality is directly linked to the time it takes to diagnose the disease and begin its treatment; a correct initial treatment reduces the risk of death for patients with this disease. This work presents supervised machine learning models for the prognosis of patients who present symptoms of sepsis, built from ICU monitoring data available on PhysioNet.

11
  • MAXWELL ESDRA ACIOLI SILVA
  • Proposal and Evaluation of a Hybrid Feature Selection Model for Breast Cancer Prognosis

  • Líder : RAFAEL DE AMORIM SILVA
  • MIEMBROS DE LA BANCA :
  • ALMIR PEREIRA GUIMARAES
  • BRUNO ALMEIDA PIMENTEL
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • RAFAEL DE AMORIM SILVA
  • Data: 26-ago-2022


  • Resumen Espectáculo
  • Artificial Intelligence has played an instrumental role in health care and has been used in several branches of medicine. One of its main applications is the prognosis of breast cancer. Cancer is considered the second leading cause of death from disease in the world, and breast cancer stands out as the type with the highest incidence among women worldwide. One of the main challenges in this scenario is identifying which characteristics are most relevant to the development of this type of neoplasia by a patient; this filtering is carried out using feature selection methods. This work presents a hybrid feature selection model, to be used by the patient's clinicians, in order to predict breast cancer recurrence.

12
  • NAELSON DOUGLAS CIRILO OLIVEIRA
  • Lint-Based Warnings in Python Code: Frequency, Awareness, and Refactoring

  • Líder : MARCIO DE MEDEIROS RIBEIRO
  • MIEMBROS DE LA BANCA :
  • MARCIO DE MEDEIROS RIBEIRO
  • BALDOINO FONSECA DOS SANTOS NETO
  • ROHIT GHEYI
  • Data: 29-ago-2022


  • Resumen Espectáculo
  • Python is a dynamic programming language characterized by its simplicity and easy learning curve. These characteristics have led to broad adoption in both industry and academia. The language has a set of good practices that should be followed to optimize software quality metrics such as readability, security and code maintainability. A consolidated approach for detecting situations where these good practices are not being followed is the use of linting tools. When a linting tool detects a deviation from good practice, it notifies the code maintainer through a warning, and the maintainer can then perform the refactoring needed to remove it. This work focuses on lint-based Python warnings; more specifically, we investigate three aspects: how frequent they are in real Python applications, how developers perceive them, and how they are refactored. To cover this topic we conducted three studies. In the first, we analyzed 1,119 public open-source Python repositories on GitHub and characterized the occurrence of six selected warnings in them. We also conducted a survey with Python developers from 18 different countries, asking about their perception of Python code with warnings versus a refactored version of the same code. We then refactored warnings detected in 55 different public open-source repositories and submitted pull requests fixing them, which allowed us to analyze the acceptance rate of these pull requests by the repository maintainers. Our results show that 39% of the 1,119 projects have at least one lint-based warning. The survey data also show that developers prefer Python code without lint-based warnings. Regarding the pull requests, we achieved a 71.8% acceptance rate. Finally, we implemented a tool able to automatically apply refactorings to the selected warnings, removing from the code maintainer the responsibility of manually refactoring them.
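As a rough sketch of how lint-based warnings can be collected at scale, the snippet below runs pylint on a single file and tallies warnings by symbol; it assumes pylint is installed and that its JSON output format is available, and `target.py` is a placeholder path, not part of the study's tooling.

```python
# Sketch: run pylint on a file and count warnings by message symbol.
import json
import subprocess
from collections import Counter

result = subprocess.run(
    ["pylint", "--output-format=json", "target.py"],
    capture_output=True, text=True,
    check=False,  # pylint exits non-zero whenever it finds issues
)
messages = json.loads(result.stdout or "[]")
counts = Counter(m["symbol"] for m in messages)
for symbol, n in counts.most_common():
    print(f"{symbol}: {n}")
```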

13
  • VICTOR GABRIEL LIMA HOLANDA
  • Proposal and Evaluation of a prognosis model for tachycardia-diagnosed patients

  • Líder : RAFAEL DE AMORIM SILVA
  • MIEMBROS DE LA BANCA :
  • RAFAEL DE AMORIM SILVA
  • BRUNO ALMEIDA PIMENTEL
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • ALMIR PEREIRA GUIMARAES
  • Data: 29-ago-2022


  • Resumen Espectáculo
  • According to the World Health Organization, heart disease is the leading cause of death in the world; thousands of people die every year due to complications caused by this type of disease. Cardiologists therefore seek early diagnosis, preventing the disease from reaching the pathological phase (i.e., the stages where limitations arise and become increasingly greater for its carriers). In the context of tachycardia-type arrhythmias, patients need to adopt continuous medication use and change habits to reduce the impact of the disease and live a healthier life. This study proposes and evaluates a prognostic model for arrhythmias that combines information on the patient's lifestyle, comorbidities and general health status to predict morbidity and mortality in patients diagnosed with tachycardia. The proposed model uses as parameters the CHA2DS2-VASc score, clinical data and the patient's habits, which, together with SVM (Support Vector Machines) and KNN (K-Nearest Neighbors) classifiers, make it possible to classify the degree of morbidity and estimate life expectancy. In this way, the physician has access to information that should help in deciding the next steps in the treatment of that patient, which may involve changes in medication and dosages, stricter adherence to habits, and, in critical cases, surgical intervention, always aiming to extend the patient's longevity.

14
  • FABIANO SANTOS CONRADO
  • Proposal and Evaluation of a Prognosis Model For Patients with Pancreas Cancer 

  • Líder : RAFAEL DE AMORIM SILVA
  • MIEMBROS DE LA BANCA :
  • ALMIR PEREIRA GUIMARAES
  • BRUNO ALMEIDA PIMENTEL
  • RAFAEL DE AMORIM SILVA
  • Data: 29-ago-2022


  • Resumen Espectáculo
  • Pancreatic cancer (PC) is difficult to diagnose early because it evolves silently, without specific signs, and has a low response to most treatments. The overall 5-year survival rate after diagnosis is 7%, with a higher survival rate for patients who do not have metastatic disease. This low rate leads patients to ask how much time they have left to live; the more accurate the survival estimate, the more accurate the answer, allowing the doctor to prescribe the most appropriate palliative treatment. The TNM Classification of Malignant Tumours has been the most common metric used to answer this question, but it is insufficient. This research proposed and evaluated a prognosis model for PC that combines information about the patient's lifestyle, comorbidities and general health status to predict time to mortality in patients diagnosed with PC. For this, data from several centers were combined with simulated data, and machine learning techniques based on KNN and SVM were used for classification and survival prediction. The results point to a probabilistic estimate that is more precise than the traditional method based exclusively on TNM.
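A minimal sketch of the kind of KNN/SVM classification step mentioned above, trained on synthetic clinical-style features; it is not the dissertation's model and the features are placeholders.

```python
# Sketch: SVM and KNN classifiers of the kind combined in such prognosis
# models, evaluated with cross-validation on synthetic (placeholder) data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 6))  # e.g. age, staging, comorbidity scores (fake)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 300) > 0).astype(int)  # toy label

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(name, "mean CV accuracy:", round(scores.mean(), 3))
```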

15
  • CÁSSIO AQUINO ROCHA
  • Algorithms for Tourist Itinerary Optimization: an application in the state of Alagoas

  • Líder : BRUNO COSTA E SILVA NOGUEIRA
  • MIEMBROS DE LA BANCA :
  • BRUNO COSTA E SILVA NOGUEIRA
  • BRUNO ALMEIDA PIMENTEL
  • RIAN GABRIEL SANTOS PINHEIRO
  • EDUARDO VIEIRA QUEIROGA
  • Data: 30-ago-2022


  • Resumen Espectáculo
  • This work proposes an exact method for the Orienteering Problem with Hotel Selection and Time Windows (OPHS-TW) and applies it in the context of Alagoas. In OPHS-TW, we are given a set of vertices with scores and time windows, and a set of hotels. The objective is to determine a fixed number of connected trips that visit some of the vertices and maximize the sum of the collected scores. As far as we know, this is the first exact method for OPHS-TW. Our exact model was developed using Integer Linear Programming (ILP). Computational experiments performed on OPHS-TW instances from the literature show that our exact method is capable of proving several previously unknown optima. Our algorithm found 33 solutions that were previously unknown in the literature, 32 of which were proven to be optimal; in total, 357 solutions were proven optimal.

16
  • WILLIAN VICTOR DA SILVA
  • A Method for automatic disambiguation in Portuguese, integrated with the Falibras System translation process

  • Líder : PATRICK HENRIQUE DA SILVA BRITO
  • MIEMBROS DE LA BANCA :
  • JOSÉ MARIO DE MARTINO
  • EVANDRO DE BARROS COSTA
  • PATRICK HENRIQUE DA SILVA BRITO
  • Data: 30-ago-2022


  • Resumen Espectáculo
  • There are more than 1 billion people in the world with some type of disability. In Brazil, this corresponds to about 23.9% of the 190 million Brazilians; among these, 9.6 million have some hearing impairment. Deafness makes social interaction considerably more difficult, since it prevents the individual from communicating through the oral-auditory channel. These communication problems usually impair considerably the interaction of deaf students with hearing classmates, hindering the process of social integration. To facilitate communication between deaf people and hearing people, Portuguese-Libras machine translation tools can be used. However, according to reports in the literature, about 75% of the deaf community is dissatisfied with the translation produced by existing tools, and among the main causes of this dissatisfaction is the use of inappropriate signs for words with semantic ambiguity (e.g., law, public). In this work, an improvement of the Falibras System translation module is proposed, with the objective of improving translation quality with respect to the criticisms observed in the literature. The main objectives of the proposed project are: (1) to survey the state of the art on ambiguity resolution in Portuguese; and (2) to combine existing techniques to develop an ambiguity resolution module for Falibras. The evaluation of the results must consider both the use of computational resources and qualitative metrics of precision and recall, in comparison with existing works in the literature. The evaluation will be conducted based on the Goal-Question-Metric model.

17
  • FLAVIO VASCONCELOS PAIS
  • PERFORMANCE EVALUATION OF URBAN TRAFFIC USING SIMULATION: A CASE STUDY IN MACEIÓ/AL.

  • Líder : BRUNO COSTA E SILVA NOGUEIRA
  • MIEMBROS DE LA BANCA :
  • BRUNO COSTA E SILVA NOGUEIRA
  • ANDRE LUIZ LINS DE AQUINO
  • RIAN GABRIEL SANTOS PINHEIRO
  • EDUARDO ANTÔNIO GUIMARÃES TAVARES
  • Data: 31-ago-2022


  • Resumen Espectáculo
  • Latin American metropolises have been facing serious traffic congestion problems as a result of rapid population growth, increasing numbers of vehicles and inefficient public policies. Most cities do not have a real-time urban traffic control system to optimize the flow of vehicles. In this context, urban traffic simulation models appear as a low-cost alternative to evaluate problems and propose possible improvements. Using the Simulation of Urban MObility (SUMO) tool, this work proposes a new urban traffic simulation model for a prominent road in the city of Maceió, Alagoas, Brazil. The chosen road was Fernandes Lima Avenue, as it is one of the most important road corridors of the city. The proposed model allows understanding the behavior of vehicle flow on the road, which has peculiarities such as the blue lane (exclusive to public transportation) and three sections with pedestrian traffic lights. The model was validated against data obtained from real observations. The results indicate that the model provides estimates with errors smaller than 5% for the volume of vehicle traffic at signalized intersections and smaller than 10% for the total average travel time of vehicles traveling along the entire avenue. Finally, the analysis of the experimental results shows that the proposed model can be used to evaluate possible interventions on the avenue, such as the removal of the pedestrian signals and the blue lane, resulting in increased vehicle flow and reductions in travel time, fuel consumption, and carbon dioxide emissions.
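For readers unfamiliar with SUMO, the sketch below shows the basic TraCI control loop that drives such a simulation model; it assumes SUMO and its Python bindings are installed, and `avenue.sumocfg` is a placeholder configuration, not the Fernandes Lima Avenue scenario.

```python
# Sketch: stepping a SUMO simulation through TraCI and logging vehicle counts.
# Assumes SUMO and its Python "traci" bindings are installed.
import traci

traci.start(["sumo", "-c", "avenue.sumocfg"])   # placeholder scenario file
step = 0
while traci.simulation.getMinExpectedNumber() > 0 and step < 3600:
    traci.simulationStep()
    if step % 300 == 0:
        print(f"t={step}s vehicles in network: {len(traci.vehicle.getIDList())}")
    step += 1
traci.close()
```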

18
  • MARCOS ANTONIO BARBOSA LIMA
  • Particle Control in Acoustic Levitation Systems

  • Líder : LEANDRO MELO DE SALES
  • MIEMBROS DE LA BANCA :
  • LEANDRO MELO DE SALES
  • EVANDRO DE BARROS COSTA
  • JOSEANA MACEDO FECHINE
  • Data: 31-ago-2022


  • Resumen Espectáculo
  • Acoustic levitation is a method that allows the manipulation of materials without solid contact, using acoustic radiation. It has several advantages in the handling of chemical compounds and biological samples and in the manufacture of microelectronic devices, whose handling is a great challenge due to their dimensions and sensitivity to heat and electromagnetic interference. Devices for linear material transport displace particles vertically or horizontally through arrangements of Langevin transducers at one end and a reflector or another transducer at the other end. The particle is captured at the nodal points of the resulting standing wave and moved by modulating the amplitude of the transducers' vibrations. Newer devices allow particles to be captured and displaced in three dimensions simultaneously, as well as translated and rotated. In this approach, the point in space where one wants to levitate a particle is used as input to an optimization algorithm whose results represent the phase delays in a matrix of Langevin
    transducers and the resulting acoustic traps. This work models a particle transport device in three dimensions, using a matrix of transducers of 16 mm in diameter and 40 kHz frequency, controlled by a computer system on a web platform. In this system, the coordinates of the acoustic levitation points are precomputed and stored, allowing faster actuation
    on the levitated particles. Raspberry Pi and Arduino Mega 2560 boards integrate the system with the transducer matrix, converting the values previously stored in the control system database into phase delays in the transducer matrix. As a result, particle transport is controlled remotely, enabling decentralized tests, with actuators and control systems in different locations connected by a data network. Additionally, an optimization based on the BRKGA metaheuristic replaces the classic BFGS algorithm for optimizing the phase delays, thus reducing the required computing time.
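As background for the phase-delay computation mentioned above, the sketch below derives the closed-form phases that focus a 40 kHz transducer matrix on a single point in space; this is the textbook focusing rule, not the dissertation's BRKGA optimizer or its trap shapes, and the array geometry is illustrative.

```python
# Sketch: phase advances that make all wavefronts of a 40 kHz transducer
# matrix arrive in phase at a chosen focus point above the array.
import numpy as np

freq = 40e3                 # transducer frequency (Hz)
c = 343.0                   # speed of sound in air (m/s)
k = 2 * np.pi * freq / c    # wavenumber

# Hypothetical 8x8 array of 16 mm transducers lying on the z=0 plane.
pitch = 0.016
xs = (np.arange(8) - 3.5) * pitch
grid = np.array([(x, y, 0.0) for x in xs for y in xs])

target = np.array([0.0, 0.0, 0.08])   # focus 8 cm above the array centre

# Each element is phase-advanced by k*d to compensate its propagation path,
# so all contributions add constructively at the target.
distances = np.linalg.norm(grid - target, axis=1)
phases = (k * distances) % (2 * np.pi)
print(phases.reshape(8, 8).round(2))
```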

19
  • JOAO LUIZ ALVES OLIVEIRA
  • Public transport system optimization using biased random key genetic algorithm

  • Líder : BRUNO COSTA E SILVA NOGUEIRA
  • MIEMBROS DE LA BANCA :
  • BRUNO COSTA E SILVA NOGUEIRA
  • ANDRE LUIZ LINS DE AQUINO
  • RIAN GABRIEL SANTOS PINHEIRO
  • GUSTAVO RAU DE ALMEIDA CALLOU
  • Data: 14-sep-2022


  • Resumen Espectáculo
  • The economy of a city or region is closely tied to the efficiency of its public transport system. Planning a public transport system depends on several factors, such as transport modes, origin-destination demands, service quality and reliability, and operational costs, among others. These features lead to very complex problems, such as network design and vehicle frequency setting. It is usual to use heuristics, like genetic algorithms, to solve such problems on real data, since these instances are large. In this context, the present work proposes an optimization methodology, based on biased random-key genetic algorithms, for a public bus transport system considering two aspects: (i) service quality, by minimizing passengers' waiting time; and (ii) the economics of the concessionary company, by minimizing operational cost. The methodology is applied in a case study with Maceió bus transport data to propose two frequency scenarios that perform over 10% better than the current one in both aspects. Finally, future work will develop multi-objective optimizations to find compound solutions.

20
  • FILIPE FALCAO BATISTA DOS SANTOS
  • ASSESSING THE DEPENDABILITY OF MACHINE LEARNING CLOUD SERVICES THROUGH FAULT INJECTION

  • Líder : BALDOINO FONSECA DOS SANTOS NETO
  • MIEMBROS DE LA BANCA :
  • BALDOINO FONSECA DOS SANTOS NETO
  • MARCELO COSTA OLIVEIRA
  • MARCIO DE MEDEIROS RIBEIRO
  • ROHIT GHEYI
  • Data: 23-sep-2022


  • Resumen Espectáculo
  • The growing amount of public and private data generated from different sources has increased interest in technologies capable of extracting useful knowledge from large, usually unstructured, collections of data. Machine Learning (ML) techniques have been successfully employed for that purpose by both academia and the software industry. However, building ML systems can be difficult, since a massive amount of training data and expensive computational resources are often required, and machine learning technologies have a steep learning curve. These difficulties led to the popularization of ML cloud services, where users can perform ML tasks by simply sending their data to a cloud provider over APIs. However, little research has addressed the effect of typical data faults on the dependability of machine learning systems. Such faults may originate in the applications that rely on ML cloud services, being caused by hardware or connection failures, bugs, or undefined behavior. As a consequence, those faults can be reflected in the data produced by such applications and sent to the machine learning services. Seeking to evaluate the dependability of machine learning cloud services, this work proposes a study on the injection of common data faults into the input data passed to a set of commercial ML services.
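A minimal sketch of the kind of data fault injector such a study relies on; the fault types and record layout are illustrative, not the study's exact fault model or the commercial services it targets.

```python
# Sketch: inject common data faults (missing fields, nulls, numeric noise)
# into a record before it would be sent to an ML cloud service.
import copy
import random

def inject_faults(record, p=0.3, rng=None):
    rng = rng or random.Random(0)
    faulty = copy.deepcopy(record)
    for key, value in record.items():
        if rng.random() >= p:
            continue
        fault = rng.choice(["drop", "null", "noise"])
        if fault == "drop":
            faulty.pop(key, None)              # field missing entirely
        elif fault == "null":
            faulty[key] = None                 # undefined / null value
        elif fault == "noise" and isinstance(value, (int, float)):
            faulty[key] = value * rng.uniform(0.5, 1.5)  # corrupted number
    return faulty

clean = {"age": 42, "income": 3500.0, "city": "Maceio"}
print(inject_faults(clean))
```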

21
  • BRENO FELIX DE SOUSA
  • Sexual Stereotypes, Performance Expectancy and Flow Experience in Gamified Tutoring Systems

  • Líder : IG IBERT BITTENCOURT SANTANA PINTO
  • MIEMBROS DE LA BANCA :
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • GEISER CHALCO CHALLCO
  • IG IBERT BITTENCOURT SANTANA PINTO
  • LEOGILDO ALVES FREIRES
  • TELMA LOW SILVA JUNQUEIRA
  • Data: 08-dic-2022


  • Resumen Espectáculo
  • Education in Science, Technology, Engineering and Mathematics (STEM) is permeated by a traditional
    cis-heteronormative culture that dictates who stays in these spaces and who will drop out of these
    courses. There is an idea that heteronormativity is the default status quo and that everything
    deviating from it constitutes minority groups in STEM. The literature presented in this dissertation
    confirms these statements and highlights the importance of this work, showing through theories and
    quasi-experiments that these assumptions exist in traditional STEM teaching environments, both in the
    classroom and in virtual learning environments. STEM, in its current teaching and inclusion model,
    continues to perpetuate this non-inclusive tradition. The use of educational technologies is
    increasingly present in the classroom, especially in a scenario where remote education is so
    necessary. In this sense, gamification emerges as a powerful alternative for providing virtual
    learning environments. Gamification can be understood as the use of game concepts in non-game
    environments: concepts, techniques, forms, methods, elements and variables characteristic of games
    that can be used in the development of a teaching environment. The use of gamification in educational
    environments can enable effective distance learning, and it can be considered a powerful technique
    for students and teachers to interact with the teaching environment. The concern with teaching and
    learning is not recent, and the literature offers many studies on it. Gamification, however, can
    replicate stereotypes traditionally conventionalized in STEM: when gamification elements such as
    colors, phrases, avatars and sound effects are applied appropriately, they contribute positively to
    learning, whereas, when designed without a contextual-cultural analysis, they can reaffirm the
    stereotypes of traditional environments in gamified environments. Gamified and stereotyped
    technologies can directly impact student performance, interfering with learning and even with
    students' progression. In this context, this study presents and identifies the effects of stereotypes
    in gamified teaching environments and their relationship with the flow experience, performance
    expectancy and performance of socially minoritized groups, such as lesbians, gays, bisexuals,
    transvestites, transgender men, queer, intersex, asexual and pansexual people, plus further diversity
    (LGBTQIAP+), in STEM courses. For this, a systematic review is presented, followed by a meta-analysis,
    a quasi-experiment and a qualitative study. This study thus contributes to understanding the impact
    that sexual stereotypes in gamified environments have on student performance. It is hoped that the
    results of this research can also contribute to better inclusion of sexual diversity.

22
  • JESSICA FERNANDA SILVA BARBOSA
  • Investigating Gender Stereotypes, Negative Thinking and Flow Experience in Gamified Educational Technologies

  • Líder : IG IBERT BITTENCOURT SANTANA PINTO
  • MIEMBROS DE LA BANCA :
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • GEISER CHALCO CHALLCO
  • IG IBERT BITTENCOURT SANTANA PINTO
  • LEONARDO BRANDAO MARQUES
  • Data: 08-dic-2022


  • Resumen Espectáculo
  • Using gamification in classrooms has become an attractive alternative in educational digital technologies, with the objective of converting tedious learning activities into engaging ones. Nonetheless, its use sometimes fails to bring the expected results. Stereotyping is among the factors that may negatively affect learning in gamified digital technologies, hindering student performance or even inducing cognitive interference, such as negative thinking. Under these circumstances, instead of directing effort and concentration toward learning activities, students are led to disperse, which results in a performance drop. Considering this possibility, the present study aims to identify and analyze the effects of gender stereotypes on the flow experience, negative thinking and learning performance in gamified digital technologies. To achieve this goal, systematic literature reviews and quasi-experimental studies will be conducted. The former provide an overview of how stereotypes and gamification affect negative thinking, flow experience and learning performance, while the latter will be conducted to explain the consequences of gender stereotyping in gamified digital technologies. Therefore, this study has the potential to provide evidence of the influence of stereotypes in gamified digital technologies, whether by leading to negative thoughts, affecting the flow experience or contributing to poor learning performance. Furthermore, we aim to identify whether one gender is more affected by stereotyping and to list possible causes of these effects. The expected results may contribute to guidelines, advice and practices adapted to avoid stereotype threat when implementing gamified digital technologies, seeking equitable environments and promoting gender equity.

23
  • FRANCYS RAFAEL DO NASCIMENTO MARTINS
  • Investigating Implicit Gender Stereotypes, Self-Efficacy and Experience Flow in Gamified Educational Environments

  • Líder : IG IBERT BITTENCOURT SANTANA PINTO
  • MIEMBROS DE LA BANCA :
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • GEISER CHALCO CHALLCO
  • IG IBERT BITTENCOURT SANTANA PINTO
  • LEONARDO BRANDAO MARQUES
  • Data: 08-dic-2022


  • Resumen Espectáculo
  • The use of gamification has been widely studied in recent years, particularly when it is used as an
    intervention to increase motivation and engagement in educational settings. Even with evidence that
    gamification has positive impacts, there are also studies that point to problems in its application.
    In some cases, positive results are not achieved due to the influence of several factors, one of
    which is gender stereotype threat, which can impact the student's level of self-efficacy and learning
    performance. Self-efficacy is the feeling of having the ability to achieve goals; thus, when a given
    task is proposed and the individual does not believe they have the necessary capabilities to perform
    it, success in completing the task is unlikely. This factor influences learning performance and can
    cause negative impacts. For example, the stereotype that men are more skilled in math can negatively
    influence some women, causing them to have a very low perception of self-efficacy and leading to
    learning problems, even if the activities are carried out in a gamified environment. Thus, in this
    dissertation, our objective is to identify and explain how gender stereotypes impact self-efficacy,
    the flow experience (a state of total immersion desired in educational settings) and student learning
    performance in gamified online educational environments. For this, a systematic literature review
    (with meta-analysis) and quasi-experimental studies will be conducted. The systematic reviews
    provided an overview of how gender stereotypes and gamification affect self-efficacy, flow experience
    and learning performance. Quantitative and qualitative quasi-experimental studies will be conducted
    to identify and explain the effects caused by gender stereotyping in gamified online educational
    environments. We thus aim to provide evidence on whether gender stereotypes affect the flow
    experience, whether they affect self-efficacy and whether they contribute to poor student learning
    performance. We also hope to identify whether one gender is more affected by stereotyping and to
    understand the causes of these effects. This evidence will contribute to guidelines, recommendations
    and practices that lead to gamified online educational environments implemented without stereotype
    threat, creating fairer environments that promote gender equity.

2021
Disertaciones
1
  • BRUNO GEORGEVICH FERREIRA
  • New Neural Network Model for Object Detection Applied to Industrial Inspection
  • Líder : TIAGO FIGUEIREDO VIEIRA
  • MIEMBROS DE LA BANCA :
  • DOUGLAS CEDRIM OLIVEIRA
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • TIAGO FIGUEIREDO VIEIRA
  • Data: 23-abr-2021


  • Resumen Espectáculo
  • In many industries, assembling specific components to be inserted into a plastic container is a manual procedure. Each kit must be composed of specific parts following a pre-defined recipe, which can be updated over time. Kits assembled inadequately cause rework, reducing production quality and increasing production time. Here we propose improvements to an object detection model capable of performing quality inspection, extending the Few-Shot Object Detection (FSOD) features of the OS2D model previously proposed in the literature. The OS2D model has limitations when trying to detect objects whose aspect ratio does not fit the predetermined anchors. In addition, its inference mechanism is restricted to a single reference image per class, making it difficult to detect more complex objects that look different from each angle. With this in mind, an improved OS2D model (OS2D+) is proposed, incorporating distortion and correction layers and modifying the inference strategy to allow multiple reference images per component. To evaluate the OS2D+ solution, an image-processing-based solution (PIMG) was also developed, so that the results of both solutions could be compared. The distortion and correction layers of OS2D+ allow it to detect objects whose aspect ratio does not fit any OS2D detection anchor, and the modified inference mechanism enables multiple reference images to be used for a kit component. The proposed OS2D+ solution proved to be more robust than PIMG, detecting fewer false positives and false negatives, in addition to presenting a shorter inference time. However, the PIMG solution provided better bounding box (BB) estimates due to its location-proposal process. Even so, the OS2D+ solution has the potential to reach equivalent estimates after fine-tuning of its parameters. A database composed of 111 photos was also built, describing five different kits and their respective components; this database was annotated and used to measure the results of the two proposed solutions.
2
  • GUSTAVO COSTA GOMES DE MELO
  • A low-cost IoT system for real-time monitoring of climatic variables and photovoltaic generation for Smart Grid application

  • Líder : ERICK DE ANDRADE BARBOZA
  • MIEMBROS DE LA BANCA :
  • MAURICIO BELTRAO DE ROSSITER CORREA
  • BRUNO COSTA E SILVA NOGUEIRA
  • ERICK DE ANDRADE BARBOZA
  • Data: 28-abr-2021


  • Resumen Espectáculo
  • The share of renewable energy in electricity generation has been growing worldwide. Monitoring and acquiring data is essential to characterize the renewable resources available on-site, evaluate the efficiency of electrical conversion, detect failures and optimize electricity production. Commercial monitoring systems for photovoltaic systems are generally expensive and closed to modification. This work proposes a low-cost real-time IoT system, for micro and mini photovoltaic generation systems, that can monitor DC voltage, DC current, AC power, and seven meteorological variables. The proposed system measures all the relevant meteorological variables, measures PV generation variables directly from the plant (not from the inverter), is implemented using open software, connects to the internet without cables, stores data locally and in the cloud, and uses the Network Time Protocol (NTP) to synchronize the devices' clocks. To the best of our knowledge, no work reported in the literature presents all of these features together. Moreover, experiments carried out with the proposed system showed good effectiveness and reliability. This system enables the use of fog and cloud computing in a photovoltaic system and the creation of a time-series measurement dataset, which in turn enables the use of machine learning to create smart photovoltaic systems.
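The sketch below illustrates one acquisition cycle such a node might run (read, timestamp, store locally, push to the cloud); `read_sensors()` and the endpoint URL are placeholders, not the system's actual drivers or API.

```python
# Sketch: one acquisition cycle of a low-cost monitoring node: read sensors,
# timestamp the sample, store it locally (CSV) and push it to a cloud endpoint.
import csv
import json
import random
import time
import urllib.request

def read_sensors():
    # Stand-in for real ADC drivers (DC voltage, DC current, irradiance, ...).
    return {"dc_voltage": round(random.uniform(30, 40), 2),
            "dc_current": round(random.uniform(0, 8), 2),
            "irradiance": round(random.uniform(0, 1000), 1)}

sample = {"timestamp": time.time(), **read_sensors()}

# Local storage, so no data is lost if the connection drops.
with open("pv_log.csv", "a", newline="") as f:
    csv.writer(f).writerow(sample.values())

# Cloud storage: POST the JSON sample to a (hypothetical) ingestion endpoint.
req = urllib.request.Request(
    "https://example.org/pv/ingest",            # placeholder URL
    data=json.dumps(sample).encode(),
    headers={"Content-Type": "application/json"},
)
try:
    urllib.request.urlopen(req, timeout=5)
except OSError as exc:
    print("upload failed, kept local copy:", exc)
```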

3
  • ADRIANO DA SILVA ARAÚJO
  • Title

  • Líder : LEANDRO DIAS DA SILVA
  • MIEMBROS DE LA BANCA :
  • LEANDRO DIAS DA SILVA
  • ALVARO ALVARES DE CARVALHO CESAR SOBRINHO
  • IVO AUGUSTO ANDRADE ROCHA CALADO
  • RAFAEL DE AMORIM SILVA
  • IVANOVITCH MEDEIROS DANTAS DA SILVA
  • Data: 30-abr-2021


  • Resumen Espectáculo
  • Abstract

4
  • MARIO DIEGO FERREIRA DOS SANTOS
  • A comparative study of GPU metaheuristics for data clustering

  • Líder : BRUNO COSTA E SILVA NOGUEIRA
  • MIEMBROS DE LA BANCA :
  • BRUNO COSTA E SILVA NOGUEIRA
  • ERICK DE ANDRADE BARBOZA
  • RIAN GABRIEL SANTOS PINHEIRO
  • ERMESON CARNEIRO DE ANDRADE
  • Data: 30-abr-2021


  • Resumen Espectáculo
  • Clustering is a fundamental class of problems with numerous applications in many areas of knowledge, including bioinformatics, computer vision, data mining, text mining, and web page grouping. Given a set of n objects, clustering aims to automatically group these objects into k groups, usually disjoint, called clusters, using a pre-established similarity measure. Clustering problems in general have high computational complexity and involve a large amount of input data. Thus, parallel architectures such as Graphics Processing Units (GPUs) are interesting alternatives to accelerate the clustering process. In this work, we conducted a comparative study of GPU-accelerated metaheuristics for clustering data. Three population-based metaheuristics were implemented on the GPU: Particle Swarm Optimization (PSO), Differential Evolution (DE), and Scatter Search (SS). The implementation of these metaheuristics was divided into two parts: a problem-independent part and a problem-dependent part. The problem-independent part comprises the selection, replacement, and combination operators of each metaheuristic, while the problem-dependent part is the objective function. The problem-independent part was implemented using the libCudaOptimize framework, and the problem-dependent part was created by transforming the clustering problem into a global optimization problem subject to box constraints. The proposed metaheuristics were compared with the best current clustering algorithm considering execution time and solution quality. The results indicate that the GPU-based PSO (GPU-PSO) obtained the best results in comparison with the other GPU-based metaheuristics and with the current best method. Also, our GPU implementation of the objective function obtained an average speedup of 175x over the sequential version. These results demonstrate that a GPU approach to the clustering problem is very promising.
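To make the "problem-dependent part" concrete, the sketch below shows clustering cast as a continuous objective: a flat candidate vector is decoded into k centroids and scored by the sum of squared distances of each point to its nearest centroid, the value a PSO/DE/SS individual would be evaluated on. The data are synthetic and this is not the GPU kernel used in the work.

```python
# Sketch: clustering as a continuous (box-constrained) objective function.
import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(loc, 0.5, (100, 2))
                  for loc in ([0, 0], [4, 4], [0, 5])])   # synthetic points
k, dim = 3, data.shape[1]

def clustering_objective(candidate):
    """Decode a flat vector into k centroids and return the SSE to minimize."""
    centroids = candidate.reshape(k, dim)
    dists = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.min(axis=1) ** 2).sum())

# Evaluate one random candidate, as a metaheuristic would do many times.
candidate = rng.uniform(data.min(), data.max(), size=k * dim)
print("objective value:", clustering_objective(candidate))
```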

5
  • JEFFERSON DAVID DOS ANJOS SILVA
  • Title

  • Líder : LEANDRO DIAS DA SILVA
  • MIEMBROS DE LA BANCA :
  • LEANDRO DIAS DA SILVA
  • IVO AUGUSTO ANDRADE ROCHA CALADO
  • RAFAEL DE AMORIM SILVA
  • IVANOVITCH MEDEIROS DANTAS DA SILVA
  • Data: 30-abr-2021


  • Resumen Espectáculo
  • Abstract

6
  • ALEXANDRE SERGIO DANTAS DE LIMA
  • Multi-population BRKGA algorithm for the automatic clustering problem

  • Líder : RIAN GABRIEL SANTOS PINHEIRO
  • MIEMBROS DE LA BANCA :
  • JEAN CARLOS TEIXEIRA DE ARAUJO
  • ANDRE LUIZ LINS DE AQUINO
  • BRUNO COSTA E SILVA NOGUEIRA
  • RIAN GABRIEL SANTOS PINHEIRO
  • Data: 30-abr-2021


  • Resumen Espectáculo
  • The process of grouping data is known as clustering. In the literature, the clustering process has two variations: (i) if the number of clusters is predefined, the process is known as the Clustering Problem (CP) or k-Clustering Problem, and (ii) when the number of clusters is not defined, the process becomes known as the Automatic Clustering Problem (ACP). The Automatic Clustering Problem is NP-hard, which prevents the exact number of clusters from being determined efficiently. Having well-grouped data is crucial for making more assertive decisions, and data clustering has applications in diverse areas of knowledge, such as engineering, administration, economics, biology and physics, which present problems represented by mathematical models. Genetic algorithms are a class of algorithms that can be used to solve this type of problem; they are based on Darwinian evolutionary processes, selecting the best solutions within a population. The BRKGA (Biased Random-Key Genetic Algorithm) is a variation of genetic algorithms in which the solutions of a problem are represented as vectors of randomly generated real-valued keys in the continuous interval [0, 1). The fitness of a feasible solution is determined by a decoder that maps this vector to a real value. This work proposes a multi-population BRKGA to identify the ideal number of clusters according to the silhouette index, a clustering quality measure widely used in the literature. In the proposed algorithm, the solution space is partitioned so that each subpopulation represents solutions with a fixed number of clusters k; an independent BRKGA is then applied to each subpopulation. Computational experiments were carried out on fifty-five instances from the literature. The proposed algorithm is compared with existing methods from the literature, showing superior results and suggesting that the algorithm is promising.
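A minimal sketch of the random-key idea: the first gene selects k, the remaining genes place k centroids, and the decoder returns the silhouette index as fitness. It only illustrates the encoding, not the dissertation's multi-population BRKGA.

```python
# Sketch: decoding a random-key chromosome into a clustering and scoring it
# with the silhouette index. Data is synthetic.
import numpy as np
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)
data = np.vstack([rng.normal(loc, 0.4, (80, 2))
                  for loc in ([0, 0], [3, 3], [0, 4])])
K_MAX, DIM = 6, data.shape[1]

def decode(keys):
    k = 2 + int(keys[0] * (K_MAX - 1))                 # number of clusters in [2, K_MAX]
    centroids = keys[1:1 + k * DIM].reshape(k, DIM)
    centroids = data.min(0) + centroids * (data.max(0) - data.min(0))  # scale into data range
    labels = np.linalg.norm(data[:, None] - centroids[None], axis=2).argmin(axis=1)
    if len(np.unique(labels)) < 2:
        return -1.0                                    # degenerate solution
    return silhouette_score(data, labels)

chromosome = rng.random(1 + K_MAX * DIM)               # random keys in [0, 1)
print("k =", 2 + int(chromosome[0] * (K_MAX - 1)),
      "silhouette =", round(decode(chromosome), 3))
```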

7
  • BRUNO GABRIEL CAVALCANTE LIMA
  • User-Oriented Natural Human-Robot Control with Thin-Plate Splines and LRCNN

  • Líder : TIAGO FIGUEIREDO VIEIRA
  • MIEMBROS DE LA BANCA :
  • DOUGLAS CEDRIM OLIVEIRA
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • TIAGO FIGUEIREDO VIEIRA
  • Data: 26-may-2021


  • Resumen Espectáculo
  • This work proposes a novel vision-based robotic-arm teleoperation approach. By using a single depth camera, the approach
    exempts the user from wearing any devices. Through a natural user interface, it also replaces the conventional fine-tuning
    process of robotic position-control calibration with a direct capture of the human body. The proposed approach consists of
    two main parts. The first is a nonlinear, customizable movement mapping based on Thin-Plate Splines (TPS) that directly
    transfers human body motion to robotic arm motion; such a mapping allows matching dissimilar bodies with different kinematic
    constraints and different workspace shapes. The second is a deep neural network hand-state classifier based on Long-term
    Recurrent Convolutional Networks (LRCN), which exploits the temporal coherence of the acquired depth data. Finally, the
    proposed approach is validated and evaluated. For the hand-state classifier, a cross-validation experiment compares the
    current approach with a baseline; results reveal an increase in classifier accuracy obtained by exploring the temporal
    coherence present in sequential depth data. For the movement mapping, a user study is performed over a set of practical
    experiments involving variants of pick-and-place tasks in a simplified manufacturing environment. For this study, we
    developed a validation environment using the Robot Operating System (ROS) as the main framework. Also compared to a
    baseline, the TPS position mapping showed better comfort and precision of user control in regions near the robot workspace
    boundaries, where the baseline approach performed poorly. Moreover, the results suggest that the new approach did not
    increase the experiment's task difficulty.

8
  • JOAO LUCAS MARQUES CORREIA
  • Brazilian Data Scientists: Revealing their Challenges and Practices on Machine Learning Model Development

  • Líder : BALDOINO FONSECA DOS SANTOS NETO
  • MIEMBROS DE LA BANCA :
  • ALESSANDRO FABRICIO GARCIA
  • BALDOINO FONSECA DOS SANTOS NETO
  • RAFAEL MAIANI DE MELLO
  • THIAGO DAMASCENO CORDEIRO
  • Data: 09-jun-2021


  • Resumen Espectáculo
  • Data scientists often develop machine learning models to solve a variety of problems in industry and academia. To build these models, these professionals usually perform activities that are also part of the traditional software development lifecycle, such as eliciting and implementing requirements. One might argue that data scientists could rely on traditional software engineering to build machine learning models. However, machine learning development has certain characteristics that may raise challenges and lead to the need for new practices. The literature lacks a characterization of this knowledge from the perspective of data scientists. In this work, we characterize challenges and practices concerning the engineering of machine learning models that deserve attention from the research community. To this end, we performed a qualitative study with eight data scientists across five different companies, with different levels of experience in developing machine learning models. Our findings suggest that: (i) data processing and feature engineering are the most challenging stages in the development of machine learning models; (ii) synergy between data scientists and domain experts is essential in most of these stages; and (iii) the development of machine learning models lacks the support of a well-engineered process.

9
  • JORGE SANTOS LEANDRO
  • Patient-Specific Non-Invasive Parameter Estimation for 0D models of the Human Cardiovascular System Using Deep Learning

  • Líder : THIAGO DAMASCENO CORDEIRO
  • MIEMBROS DE LA BANCA :
  • THIAGO DAMASCENO CORDEIRO
  • XU YANG
  • ANTONIO MARCUS NOGUEIRA LIMA
  • Data: 27-ago-2021


  • Resumen Espectáculo
  • For patients with severe heart disease, heart transplantation is still the best treatment option. However, Ventricular Assist Devices (VADs) have been used successfully to support the pumping of the cardiac muscle and meet the needs of the human cardiovascular system (CVS). Lumped-parameter (0D) models are of great importance for computational simulations of hemodynamic variables (HVs), using either CVS models or VAD models, making it possible to analyze the performance of different operation modes even before implanting the device in a patient. Furthermore, patient-specific models allow control systems to be tuned according to the clinical situation of that patient. The parametric estimation process of such models requires patient data, and the HVs of interest are not always available. Thus, one should preferably use HVs obtained through non-invasive techniques that are common in the medical-hospital environment. This work investigates the feasibility of a parametric estimation process for a 0D model of the human CVS for specific patients. For this purpose, deep learning techniques are used, with only the systemic arterial blood pressure signal as input, since it is easily obtained by non-invasive methods. The sensitivity function is calculated to investigate the influence of CVS parameter variation on all HVs, since this correlation is directly related to the accuracy of the estimated parameter values. The results highlight the very low sensitivity of systemic pressure to certain parameters; this impairs their estimation and confirms the need to add more HVs as estimator inputs. The sensitivity study also shows that varying some parameters does not significantly influence any of the HVs, impairing the entire patient-specific estimation process.
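For context, the snippet below integrates a textbook two-element Windkessel, the simplest 0D lumped-parameter building block (C dP/dt = Q(t) - P/R); the parameter values and the inflow waveform are illustrative, and this is not the dissertation's full CVS model.

```python
# Sketch: a two-element Windkessel (arterial compliance C, resistance R)
# driven by a half-sine ejection waveform, integrated with scipy.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0    # peripheral resistance (mmHg*s/mL), illustrative
C = 1.5    # arterial compliance (mL/mmHg), illustrative
T = 0.8    # cardiac period (s)

def inflow(t):
    # Half-sine ejection during the first 0.3 s of each beat, zero in diastole.
    phase = t % T
    return 300.0 * np.sin(np.pi * phase / 0.3) if phase < 0.3 else 0.0

def dPdt(t, P):
    return (inflow(t) - P[0] / R) / C

sol = solve_ivp(dPdt, (0, 8), [80.0], max_step=0.005)   # start at 80 mmHg
last_beat = sol.y[0][sol.t > 7.2]
print(f"pressure range over the last beat: {last_beat.min():.1f} to {last_beat.max():.1f} mmHg")
```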

10
  • RANDY AMBROSIO QUINDAI JOAO
  • Systematic Review and Meta-Analysis – Processes Towards Selection Automation

  • Líder : ANDRE LUIZ LINS DE AQUINO
  • MIEMBROS DE LA BANCA :
  • ANDRE LUIZ LINS DE AQUINO
  • FABIANE DA SILVA QUEIROZ
  • JORGE ARTUR PECANHA DE MIRANDA COELHO
  • RIAN GABRIEL SANTOS PINHEIRO
  • Data: 28-ago-2021


  • Resumen Espectáculo
  • This work presents processes towards selection automation to support Systematic Literature Reviews (SLR), so that the time spent analyzing articles in the selection phase is reduced, allowing reviewers to move to the extraction phase faster and more smoothly, supported by statistical views. The standard SLR methodology has three main phases: planning the review, conducting the review, and dissemination. Conducting the review is the most daunting phase, because almost all of its steps involve manual reading and categorization; in this phase, review authors verify which studies meet the inclusion criteria for the extraction phase by manually reading titles and abstracts. We use topic modeling to map and search for relationships that are not evident to human discernment, and we rely on computational power, rather than on the number of articles, to reduce the duration of this phase from months to hours or minutes. This work proposes strategies for clustering the studies into topics, generating summaries for the clustered studies, and producing data graphics. For better data stratification, we tested several propositions and achieved good results by exploring the BibTeX data. For clustering of studies, we transformed the title, abstract and keywords of each study into a word cloud, and grouped the studies using a natural language processing technique called Sentence Boundary Detection to find and segment meaningful sentences; studies with the same sentences are put together, organized, and clustered by sentence frequency. Summaries for the clustered studies are generated using natural language generation, and we compare Markov chain generation with recurrent neural network generation to assess the quality of the generated text. Data graphics are obtained by exploring the BibTeX data already available and mining relations of semantic change or author collaboration groups. The methodology follows best practices for conducting and reporting reviews, thus solving a practical problem effectively with reproducible and repeatable results. The main goal of review automation is to save lives by accelerating the adoption of better healthcare standards. We present techniques for study summarization, categorization into related topics, and bibliography statistics.
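As one simple stand-in for the topic-based grouping step described above, the sketch below clusters study records by TF-IDF similarity of their text; the toy records are placeholders and this is not the work's actual pipeline.

```python
# Sketch: grouping candidate studies by the text of title+abstract using
# TF-IDF features and k-means.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

records = [
    "deep learning for medical image segmentation",
    "convolutional networks segment tumors in MRI",
    "genetic algorithm for vehicle routing problem",
    "metaheuristics applied to routing and scheduling",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(records)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for text, label in zip(records, labels):
    print(label, text)
```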

11
  • LEANDRO MARTINS DE FREITAS
  • Surrogate Model to the Adaptive Control of Optical Amplifier Operating Point based on Machine Learning

  • Líder : ERICK DE ANDRADE BARBOZA
  • MIEMBROS DE LA BANCA :
  • CARMELO JOSE ALBANEZ BASTOS FILHO
  • ERICK DE ANDRADE BARBOZA
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • Data: 28-sep-2021


  • Resumen Espectáculo
  • The adaptive control of the optical amplifier operating point (ACOP) is one of the problems in the challenge of dynamic operation of optical communications and networks. ACOP approaches aim to define the gains of optical amplifiers dynamically in order to increase transmission quality after a cascade of amplifiers. The most recent ACOP approach uses a multi-objective evolutionary optimization algorithm to define the amplifier gains that maximize the optical signal-to-noise ratio (OSNR) and minimize OSNR ripple. Despite the promising results regarding quality of transmission, relying on an evolutionary algorithm to make decisions in real time is not desirable, because its iterative nature usually implies high execution times. Therefore, this work proposes a surrogate model that can obtain solutions as good as those of the multi-objective evolutionary algorithm, but in less time. We considered five machine learning (ML) regression techniques, trained with the optimization algorithm's solutions. Results showed that, for all cases, the median regression error is less than 1.15 dB and that one regressor can be used to define the amplifiers' gains and the variable optical attenuators' losses of an entire optical link. They also showed that the most straightforward regressor is 28,000 times faster than the evolutionary optimization approach.

12
  • GEYMERSON DOS SANTOS RAMOS
  • Optimizing Allocation and Handover Processes in Mobile Networks

  • Líder : ANDRE LUIZ LINS DE AQUINO
  • MIEMBROS DE LA BANCA :
  • ANDRE LUIZ LINS DE AQUINO
  • ERICK DE ANDRADE BARBOZA
  • RIAN GABRIEL SANTOS PINHEIRO
  • ALEXANDRE MENDES
  • MARILIA PASCOAL CURADO
  • Data: 27-oct-2021


  • Resumen Espectáculo
  • The growing number of devices connected to the Internet has required advances in wireless communication technologies. 4G networks are gradually being replaced by 5G, which offers greater speed, heterogeneity, and scalability. The fifth generation also provides broad support for software-defined networking (SDN) applications, which increase the programming flexibility of protocols and processes that were previously embedded and difficult to modify in network devices. This work aims to improve processes in 5G networks by using software-defined networks. SDN applications can achieve improvements in mobility, load distribution, and cost reduction. Through mathematical models, our work focuses on optimizing the allocation of users to base stations of telecommunication networks, minimizing the handover of users between base stations, and improving network communication quality. The contributions of this work are: i) a mathematical model for allocating mobile network users to base stations, also aiming at handover reduction; ii) a metaheuristic solution as an alternative to exact models, since exact models may not scale and may present unfeasible solving times under computationally restricted conditions; iii) a model evaluation in simulated mobility scenarios considering the handover process behavior and the distribution of network users according to available bandwidth. Our allocation model considers the average handover frequency of each base station and the Reference Signal Received Quality (RSRQ) indicator between users and base stations. The model evaluation used exact and heuristic methods, namely the branch-and-bound algorithm, tabu search, iterated local search, and a greedy solution. On average, the iterated local search algorithm reduced execution time by approximately 82% compared to the exact branch-and-bound algorithm. Regarding the RSRQ indicator, the solution achieved a 1.45% average gain, and the number of performed handovers was maintained, compared to a similar model from the literature. Despite the modest improvement, which makes our proposal statistically equivalent to the literature model, it offers the advantage of not needing to predict users' possible future routes: only the current position is required. Furthermore, our solution also considers the base stations' bandwidth capacity, controlling allocation and network occupation limits.

13
  • EMERSON MARTINS DA SILVA
  • A CROWDSOURCING MODEL FOR MANAGEMENT OF REGULATORY ACTS OF EDUCATION COUNCILS IN BRAZIL
  • Líder : ALAN PEDRO DA SILVA
  • MIEMBROS DE LA BANCA :
  • ALAN PEDRO DA SILVA
  • IVO AUGUSTO ANDRADE ROCHA CALADO
  • RANILSON OSCAR ARAUJO PAIVA
  • SEIJI ISOTANI
  • Data: 28-oct-2021


  • Resumen Espectáculo

  • Normative acts are general, abstract and impersonal legal norms that establish or suggest behaviors and have a normative load, that is, they establish norms, rules, standards or obligations [Meirelles et al., 1966; MJSP, 2017]; they define the correct application of the law, thus preventing someone from doing something that the law prohibits, and regulate life in society without specifying the individual who will be affected by the norm [Politique, 2016]. For example, Decree No. 9,057/17 is a normative act that regulates the offer of distance learning courses in the teaching categories provided for in Brazil. Another example of a normative act is Ordinance No. 1,657/2018, which authorizes the issuance of the National Driver's License (CNH) in electronic media, called the Digital CNH. These normative acts shape our behavior and that of public and/or private entities, and they are built by all of us as a society. Education councils are collegiate bodies of a normative, deliberative and consultative nature, which interpret and decide, according to their competences and attributions, on the application of educational legislation and propose suggestions for improving teaching systems [CNE/CP, 1999]. They are responsible for preparing ordinances, opinions, resolutions, indications, technical notes and other documents that contribute to the construction of public policies [da Glória Gohn, 2002]. They also aim to provide dialogue, listening to the demands of the community and expressing themselves to higher education bodies, such as the Ministry of Education (MEC) and the National Council of Education (CNE), on education matters of interest to different sectors, being organized into spheres of action.

14
  • ANTHONY EMANOEL DE ALBUQUERQUE JATOBA
  • Multimodality CT/MRI Radiomics for Lung Nodule Malignancy Suspiciousness Classification

  • Líder : MARCELO COSTA OLIVEIRA
  • MIEMBROS DE LA BANCA :
  • MARCELO COSTA OLIVEIRA
  • PAULO MAZZONCINI DE AZEVEDO MARQUES
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • Data: 29-oct-2021


  • Resumen Espectáculo
  • Lung cancer is the most common and deadly form of cancer, and its early diagnosis is decisive to the patient's survival. Computed Tomography (CT) is the gold-standard imaging modality for lung cancer management, but recent studies have shown the potential of Magnetic Resonance Imaging (MRI) in lung cancer diagnosis and how combining multimodality medical images can yield better outcomes. In this study, we evaluated whether the combination of CT and MRI scans from lung cancer patients can leverage a more precise malignancy suspiciousness classification. For such, we registered paired CT and MRI scans from 47 patients, segmented the nodules in each modality, extracted radiomics features, and performed an experiment with an XGBoost classifier, evaluating models' performance metrics across 30 trials. The same experiment was performed with four sets of features: 1) CT-only; 2) MRI-only; 3) CT and MRI features; 4) CT/MRI fused images. Our results indicate that the image fusion approach can yield significant AUC performance gains over the single-modality models, with an average AUC of 0.794, but feature concatenation is not an adequate strategy for dealing with multimodality data, as its average AUC of 0.770 did not indicate improvement over the single modalities. Additionally, we observed that MRI, with an average AUC of 0.770, has shown significantly better performance than CT, with 0.754, encouraging further studies of MRI as a lung cancer management imaging modality. Finally, an analysis of the importance of radiomics features reinforced the relevance of features that reflect morphological characteristics of a nodule, such as its dimension and roundness, as well as texture features that relate to the intratumoral environment, measuring its complexity and homogeneity.
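A minimal sketch of the evaluation protocol (repeated train/test trials of an XGBoost classifier scored by AUC); the features are synthetic stand-ins for radiomics descriptors, not the study's CT/MRI data.

```python
# Sketch: 30 repeated train/test splits of an XGBoost classifier, reporting
# mean AUC, on synthetic radiomics-like features.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))                       # fake radiomics features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 1, 200) > 0).astype(int)

aucs = []
for trial in range(30):
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=trial, stratify=y)
    model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
    model.fit(X_tr, y_tr)
    aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

print(f"mean AUC over 30 trials: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```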

15
  • CARLA FABIANA GOMES DE SOUZA

  • TRAINING HIGH PERFORMANCE GROUPS AND AGILE METHODS IN PROJECT-BASED COLLABORATIVE LEARNING

  • Líder : ALAN PEDRO DA SILVA
  • MIEMBROS DE LA BANCA :
  • ALAN PEDRO DA SILVA
  • GEISER CHALCO CHALLCO
  • RACHEL CARLOS DUQUE REIS
  • RANILSON OSCAR ARAUJO PAIVA
  • Data: 29-oct-2021


  • Resumen Espectáculo
  • Learning communication and commitment skills is one of the main challenges that students face in education [Kenski, 2008], whether in basic education (BNCC1, [SANTOS and FELICETTI, 2013]) or in higher education [Klozovski et al., 2015], and few people have adequate communication and commitment skills [Bonotto and Felicetti, 2014]. One approach to supporting the teaching of these skills is the use of agile methods and practices, which, according to Salza et al. [2019], have been shown to be effective in education; Finland, where agile methods and practices are used, is considered to have one of the best education systems in the world [Hazzan and Dubinsky, 2019]. Salza et al. [2019] define agile methods and practices as process structures used for software development that are based on values and principles, the Agile Manifesto being an example, and that set aside the traditional linear waterfall method in favor of a method in which requirements and solutions are constantly modified according to the customer's requests. The main focus of these methods is to value people over processes, thus emphasizing individuals' talents and skills. Parsons and MacCallum [2018] indicate that the application of agile methods in software engineering avoids wasting resources, time and effort, favoring an iterative, workgroup-based approach. To obtain these same benefits in the context of education, different challenges must be overcome; for example, it is necessary to define the concept of agile to be used and to indicate how educators will apply agile practices and methods in the teaching-learning process.

16
  • LUCAS ANTONIO FERRO DO AMARAL
  • Skelibras: a large 2D skeleton dataset of dynamic Brazilian signs

  • Líder : THALES MIRANDA DE ALMEIDA VIEIRA
  • MIEMBROS DE LA BANCA :
  • FABIANO PETRONETTO DO CARMO
  • MARCELO COSTA OLIVEIRA
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • TIAGO FIGUEIREDO VIEIRA
  • Data: 29-oct-2021


  • Resumen Espectáculo
  • The recognition of dynamic signs of sign languages is a difficult task that is starting to become feasible with the use of deep neural networks. However, the absence of large annotated databases makes the training of these classifiers unfeasible. In this work, a database entitled Skelibras was built, containing 57,760 samples of 6,572 classes of dynamic signs of Brazilian Sign Language (Libras). Each sign contains sequences of skeletons (poses) of the body and hands, automatically extracted and synchronized from videos of the Corpus de Libras base. To extract and organize these annotated data consistently, we present a methodology capable of identifying and tracking the poses of each speaker, synchronizing subtitles and speakers present in conversations, and synchronizing video data acquired from different points of view with the subtitles. We performed experiments with variations of deep neural networks based on convolutional layers, dense layers, and LSTM units to validate and provide preliminary results on the dataset generated in this work, enabling future comparison with new dynamic sign recognition methods.
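As a rough sketch of the kind of LSTM baseline mentioned above, the snippet below builds a sequence classifier over fixed-length pose sequences; the shapes and class count are illustrative, not the Skelibras configuration.

```python
# Sketch: an LSTM classifier over fixed-length skeleton (pose) sequences.
# Shapes are illustrative: 60 frames x 75 keypoint coordinates, 10 classes.
import numpy as np
import tensorflow as tf

n_frames, n_coords, n_classes = 60, 75, 10

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_frames, n_coords)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random tensors just to show the expected input/label shapes.
X = np.random.rand(32, n_frames, n_coords).astype("float32")
y = np.random.randint(0, n_classes, size=32)
model.fit(X, y, epochs=1, verbose=0)
model.summary()
```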

17
  • JULIO CESAR FERREIRA SILVA DE HOLANDA
  • Architecting Teacher Recommendations: Learning Analytics Applied to Video Lesson Improvement

  • Líder : ALAN PEDRO DA SILVA
  • MIEMBROS DE LA BANCA :
  • ALAN PEDRO DA SILVA
  • DIEGO DERMEVAL MEDEIROS DA CUNHA MATOS
  • RANILSON OSCAR ARAUJO PAIVA
  • RAFAEL FERREIRA LEITE DE MELLO
  • Data: 29-oct-2021


  • Resumen Espectáculo
  • Videos are the most popular digital media for education today, and video lessons are the main manifestation of this media. While the supply of and demand for video lessons grow consistently, especially during the COVID-19 pandemic, the capacity of a considerable portion of teachers to keep up with this trend has not grown proportionally, or with satisfactory results. Producing video lessons is not a trivial task for a teacher, and it is even more difficult to validate the quality of the produced video lesson, due to the characteristics of video and the subjectivity with which its quality is estimated. In this work, a video lesson validation system is proposed, which aims to generate recommendations to teachers for improvements in their video lessons. The architected system is based on definitions present in the literature about good practices for the production of video lessons. An experiment was carried out to evaluate what students and teachers consider to be good practices in video lessons, as well as to evaluate the usefulness of three recommendations created to simulate the output generated by the proposed system. The collected data show that the technical aspects of the videos weigh heavily in the preferences of the participants in the experiment. Data from related experiments show that collecting participants' interactions with the video can help the teacher make better pedagogical decisions.

18
  • ALFREDO LIMA MOURA SILVA
  • Biased Random-Key Genetic Algorithms for the Minimum Broadcast Time Problem

  • Líder : RIAN GABRIEL SANTOS PINHEIRO
  • MIEMBROS DE LA BANCA :
  • FÁBIO PROTTI
  • ANDRE LUIZ LINS DE AQUINO
  • BRUNO COSTA E SILVA NOGUEIRA
  • RIAN GABRIEL SANTOS PINHEIRO
  • Data: 12-nov-2021


  • Resumen Espectáculo
  • The Minimum Broadcast Time (MBT) is a well-known data dissemination problem whose goal is to find a broadcast scheme that minimizes the number of steps needed to execute the broadcast operation. The Weighted Minimum Broadcast Time (WMBT) is a generalization of the MBT in which each operation has a cost. Both problems have many applications in distributed systems and robot swarms. This work proposes Biased Random-Key Genetic Algorithms (BRKGA) and a hybrid algorithm (BRKGA + Integer Linear Programming) for the MBT and WMBT. We carry out experiments with our BRKGA on instances commonly used in the literature, and also on massive synthetic instances (up to 1000 vertices), allowing us to cover many realistic topologies found in industry. Our proposal is also compared with state-of-the-art exact methods and heuristics. Experimental results show that our algorithms are able to outperform the best-known heuristics for the MBT and WMBT, and also that they are a very good alternative for large instances that cannot be solved by current exact methods.
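
A central ingredient of any BRKGA is the decoder that maps a vector of random keys to a feasible solution and its fitness. The sketch below shows one plausible decoder for the MBT: each vertex receives a key, and in every round each already-informed vertex forwards the message to its uninformed neighbor with the smallest key; the fitness is the number of rounds. This is only an illustrative decoder under these assumptions, not necessarily the one proposed in the dissertation, and it omits the evolutionary loop, the weighted (WMBT) variant, and the hybrid ILP step.

    # Hedged sketch: a random-key decoder for the Minimum Broadcast Time problem.
    # keys[v] in [0, 1) gives vertex v's priority; lower key = informed earlier.
    import random

    def decode_broadcast_time(graph, source, keys):
        """Simulate the broadcast scheme induced by the keys; return its length.

        graph: dict mapping each vertex to a list of neighbors (undirected).
        source: vertex that initially holds the message.
        keys: dict mapping each vertex to a float priority.
        """
        informed = {source}
        rounds = 0
        while len(informed) < len(graph):
            newly_informed = set()
            for v in informed:
                # Each informed vertex may call at most one uninformed neighbor per round.
                candidates = [u for u in graph[v]
                              if u not in informed and u not in newly_informed]
                if candidates:
                    newly_informed.add(min(candidates, key=lambda u: keys[u]))
            if not newly_informed:          # remaining vertices are unreachable
                return float("inf")
            informed |= newly_informed
            rounds += 1
        return rounds

    if __name__ == "__main__":
        # Path graph 0-1-2-3: broadcasting from vertex 0 takes 3 rounds.
        g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
        keys = {v: random.random() for v in g}
        print(decode_broadcast_time(g, source=0, keys=keys))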

19
  • TIAGO LIMA MARINHO
  • Optimization of XGBoost hyperparameters using meta-learning


  • Líder : BRUNO ALMEIDA PIMENTEL
  • MIEMBROS DE LA BANCA :
  • BRUNO ALMEIDA PIMENTEL
  • EVANDRO DE BARROS COSTA
  • ROBERTA VILHENA VIEIRA LOPES
  • DIEGO CARVALHO DO NASCIMENTO
  • Data: 16-dic-2021


  • Resumen Espectáculo
  • With the recent growth of machine learning, both in the optimization of existing algorithms and in the emergence of new algorithms for classification and regression problems, more attention has been paid to the question of which algorithm achieves better accuracy on a given dataset. In parallel, there is the problem of finding hyperparameters for each algorithm in order to increase its precision, which is not a trivial task. This also affects companies, since they must deal with the enormous growth of data without adjusting their models to that data. An example of a machine learning algorithm that emerged in 2014 and is related to this type of problem is XGBoost. In general, such algorithms have a large number of hyperparameters, and depending on the size of the dataset to be processed, tuning them can be a very expensive task, both in memory and in time. The same happens with XGBoost: to use it for regression or classification, it is necessary to configure a large number of hyperparameters, and due to some of the heuristics it uses, the algorithm is somewhat more expensive than others. Therefore, it is evident that this tuning step needs to be optimized for this type of algorithm. To address this problem, meta-learning uses the idea of experience: if a good combination of hyperparameters has already been found for a certain dataset, meta-learning aims to reuse those hyperparameters on a new dataset with similar characteristics, reducing the cost of finding a new combination through, for example, a grid search, where testing every combination of hyperparameters can become costly, mainly depending on the size of the dataset. The advantage is that, when applying a deeper search, only hyperparameters that have already worked on other datasets need to be tested. Finally, this work aims to create a meta-learning model for recommending hyperparameters on new datasets, in order to reduce computational costs, and consequently costs for companies, since the machine learning field keeps growing. Experiments are currently being carried out on around 198 datasets, finding hyperparameters that give better results than the standard hyperparameters in the literature and comparing the results against the meta-learning model.
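
The core idea, reusing the best-known hyperparameters of the most similar previously seen dataset, can be sketched as follows; the meta-features, distance measure, and stored configurations below are illustrative assumptions, not the meta-learning model built in the dissertation.

    # Hedged sketch: recommend XGBoost hyperparameters for a new dataset by
    # finding the nearest dataset (via simple meta-features) in a repository of
    # previously tuned datasets and reusing its best configuration as a warm start.
    import numpy as np

    def meta_features(X, y):
        """A few illustrative meta-features describing a classification dataset."""
        n, d = X.shape
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        class_entropy = float(-(p * np.log2(p)).sum())
        return np.array([np.log10(n), np.log10(d), len(counts), class_entropy])

    def recommend(new_X, new_y, repository):
        """repository: list of (meta_feature_vector, best_hyperparameters)."""
        target = meta_features(new_X, new_y)
        distances = [np.linalg.norm(target - mf) for mf, _ in repository]
        return repository[int(np.argmin(distances))][1]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy repository of already-tuned datasets (meta-features -> best config).
        repo = [
            (np.array([2.0, 1.0, 2, 1.0]), {"max_depth": 4, "learning_rate": 0.10}),
            (np.array([4.0, 2.0, 5, 2.2]), {"max_depth": 8, "learning_rate": 0.05}),
        ]
        X_new = rng.normal(size=(150, 12))
        y_new = rng.integers(0, 2, size=150)
        print(recommend(X_new, y_new, repo))   # warm-start configuration for XGBoost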


2020
Disertaciones
1
  • RODRIGO DOS SANTOS LIMA
  • Understanding and Classifying Code Harmfulness

  • Líder : BALDOINO FONSECA DOS SANTOS NETO
  • MIEMBROS DE LA BANCA :
  • BALDOINO FONSECA DOS SANTOS NETO
  • LEOPOLDO MOTTA TEIXEIRA
  • MARCIO DE MEDEIROS RIBEIRO
  • Data: 28-feb-2020


  • Resumen Espectáculo
  • The presence of code smells indicates a poor implementation choice and usually worsens software quality. Therefore, code smell detection is a simple technique for identifying refactoring opportunities in software systems. In this context, previous studies have developed code smell detection tools that provide different results. However, such tools present some limitations, because code smells can be subjectively interpreted and detected in different ways. To overcome these limitations, we use different machine learning techniques to classify harmful code. Harmfulness is an essential factor to take into account when reporting code smell detection results, since harmful code contains bugs, enabling the prioritization of refactoring efforts. In our experiment, we apply different machine learning algorithms to open-source projects to detect harmful code, exploring different features and techniques.

2
  • FELIPE CARMO CRISPIM
  • 3D Facial Recognition for Kinship Analysis

  • Líder : TIAGO FIGUEIREDO VIEIRA
  • MIEMBROS DE LA BANCA :
  • DOUGLAS CEDRIM OLIVEIRA
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • TIAGO FIGUEIREDO VIEIRA
  • Data: 13-mar-2020


  • Resumen Espectáculo
  • This work presents a novel kinship recognition approach based on Deep Learning applied to facial data from color images with depth information, i.e., RGBD. To work around the lack of a suitable 3D database with kinship information, an online platform was provided where participants can submit videos captured with ordinary smartphone cameras containing their own face and those of their relatives. The videos are then processed for 3D reconstruction of the recorded faces, generating a normalized database named Kin3D. It combines depth information from normalized 3D reconstructions with 2D images, composing an RGBD kinship database that is unprecedented in the literature. Following the approaches of related work, images are organized into four categories according to their respective kinship relations. For classification, Convolutional Neural Networks (CNN) were used, as well as a Support Vector Machine to obtain a baseline. The CNN was tested on 2D kinship databases previously consolidated in the scientific literature, known as KinFaceW-I and II, and on our Kin3D for comparison with related work. Another approach was also used, gathering all first-degree relatives at once and classifying them in a binary manner. Results indicate that the addition of depth information improves the model's performance, increasing classification accuracy. At the time of writing, this is the first database containing depth information for kinship verification, together with an analysis of state-of-the-art techniques to obtain a benchmark, providing a baseline performance to stimulate further evaluations by the research community.
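
One straightforward way to let a convolutional network exploit the extra depth channel is to stack it with the RGB channels and widen the first convolution to four input channels, as in the minimal PyTorch sketch below; the architecture and input format are illustrative assumptions, not the networks evaluated on Kin3D.

    # Hedged sketch: a small CNN whose first convolution accepts 4-channel RGBD
    # input, for binary kinship verification (related / not related).
    import torch
    import torch.nn as nn

    class RGBDKinNet(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(128, n_classes)

        def forward(self, x):
            # x: (batch, 4, H, W) where channel 3 is the normalized depth map.
            return self.classifier(self.features(x).flatten(1))

    if __name__ == "__main__":
        rgbd_faces = torch.randn(8, 4, 64, 64)   # batch of RGBD face crops
        print(RGBDKinNet()(rgbd_faces).shape)    # torch.Size([8, 2])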

3
  • FRANCISCO DALTON BARBOSA DIAS
  • Is It an Exception to Test Exceptional Behavior? An Empirical Assessment Using Automated Tests in Java

  • Líder : MARCIO DE MEDEIROS RIBEIRO
  • MIEMBROS DE LA BANCA :
  • MARCIO DE MEDEIROS RIBEIRO
  • BALDOINO FONSECA DOS SANTOS NETO
  • ROHIT GHEYI
  • Data: 29-may-2020


  • Resumen Espectáculo
  • Software testing is a crucial activity to verify the internal quality of software. During testing, programmers often create tests for the normal behavior of a given functionality (for example, was a document correctly uploaded to the cloud?). However, little is known about whether programmers also create tests for exceptional behavior (for example, what happens if the network fails during the upload?). To minimize this knowledge gap, in this work we designed and conducted a mixed-method study to understand how 417 open-source Java projects test exceptional behavior using the JUnit and TestNG frameworks and the AssertJ library. We found that 254 (60.91%) projects have at least one test method dedicated to testing exceptional behavior. We also found that the number of exceptional-behavior test methods relative to the total number of test methods lies between 0% and 10% in 317 (76.02%) projects. Furthermore, 239 (57.31%) projects test only up to 10% of the exceptions used in the System Under Test (SUT). When it comes to mobile applications, we found that, in general, programmers pay less attention to testing exceptional behavior when compared with desktop/server and multi-platform programmers. Overall, we found more test methods covering custom exceptions (those created in the project itself) than the standard exceptions available in the Java Development Kit (JDK) or in third-party libraries. To triangulate the results, we conducted a survey with 66 programmers from the projects we studied. In general, the survey results confirm our findings. In particular, most participants agree that programmers often neglect tests of exceptional behavior. As implications, our numbers may be important to alert developers that more effort should be put into creating tests for exceptional behavior.

4
  • JOAO VICTOR DE LIMA MOURA
  • A STOCK MARKET PREDICTION MODEL BASED ON DEEP LEARNING AND DATA MINING.
  • Líder : XU YANG
  • MIEMBROS DE LA BANCA :
  • EVANDRO DE BARROS COSTA
  • XU YANG
  • YURI SAPORITO
  • Data: 27-ago-2020


  • Resumen Espectáculo
  • Predicting the future is something humanity longs for, as it allows preparing for eventualities that may occur. Today, with easy access to information through news and with programs that "read" users' sentiments on the internet, it is possible to use computational techniques to try to predict market behavior. In economics, this prediction opens up new ways to act on the market, allowing scenarios to be envisioned, which can facilitate decision-making about investments and the economic future of entrepreneurs, companies, and governments. Artificial intelligence tools have become a good bridge between forecasting techniques and the engineering techniques useful for approaching the problem, offering quantitative treatment and analyses of limitations. This work aims to develop a prediction model for the Brazilian stock exchange based on news mining and, for that purpose, uses techniques such as deep learning, artificial neural networks, and natural language processing to predict the behavior of Petrobras and Itaú Unibanco assets. It is worth noting that the proposed model is intended to be a decision-support tool and not a tool whose outputs should be taken with 100% certainty.

5
  • LUIZ FELIPE SALES MACEDO BARBOSA
  • Multimodal Extraction of Clothing Characteristics for Recommendation Systems Using Deep Neural Networks
  • Líder : THALES MIRANDA DE ALMEIDA VIEIRA
  • MIEMBROS DE LA BANCA :
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • EVANDRO DE BARROS COSTA
  • BRUNO ALMEIDA PIMENTEL
  • JOSÉ ANTÃO BELTRÃO MOURA
  • Data: 28-ago-2020


  • Resumen Espectáculo
  • This work aims to assist in the identification of clothing attributes using a multimodal Deep Learning strategy. We propose the use of images and unstructured textual descriptions to organize clothing item catalogs. These types of data are employed to train deep neural network architectures in multi-class classification problems, which are then able to automatically recognize attributes from these two types of data commonly found in e-commerce environments. Three classes of architectures were investigated: variations of the VGG architecture for recognition from images; architectures combining embedding, convolutional, and recurrent layers for text recognition; and hybrid architectures that combine elements from each of the previous architectures. Using a database that we collected through a web crawler from a large e-commerce site, we show in our experiments that hybrid architectures achieve better results in the classification task by combining both types of data. Our methodology makes it possible to feed clothing recommendation systems, due to the possibility of compiling and structuring catalog data, in addition to allowing the detection of insufficient visual and textual descriptions for a given attribute, which can be improved when unimodal classifiers fail to recognize that attribute.
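
The hybrid idea, one branch for the product image and one for its textual description, merged before the classification layer, can be sketched as follows in PyTorch; the layer sizes, vocabulary size, and fusion by concatenation are assumptions for illustration, not the exact architectures compared in the dissertation.

    # Hedged sketch: a hybrid image + text classifier for clothing attributes.
    # The image branch is a small CNN (standing in for a VGG-like backbone) and
    # the text branch is an embedding + LSTM encoder; features are concatenated.
    import torch
    import torch.nn as nn

    class HybridAttributeNet(nn.Module):
        def __init__(self, vocab_size=20000, n_classes=10):
            super().__init__()
            self.image_branch = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),     # -> (batch, 64)
            )
            self.embed = nn.Embedding(vocab_size, 128, padding_idx=0)
            self.text_lstm = nn.LSTM(128, 64, batch_first=True)
            self.head = nn.Linear(64 + 64, n_classes)

        def forward(self, image, tokens):
            img_feat = self.image_branch(image)                # (batch, 64)
            _, (h_n, _) = self.text_lstm(self.embed(tokens))   # description tokens
            return self.head(torch.cat([img_feat, h_n[-1]], dim=1))

    if __name__ == "__main__":
        model = HybridAttributeNet()
        photos = torch.randn(4, 3, 64, 64)           # product photos
        tokens = torch.randint(1, 20000, (4, 30))    # tokenized descriptions
        print(model(photos, tokens).shape)           # torch.Size([4, 10])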

6
  • ANDRESSA CARVALHO MELO DA SILVEIRA
  • AN INTELLIGENT SYSTEM FOR CKD RISK ASSESSMENT AND REFERRAL OF EMERGENCY PATIENTS TO HEALTH UNITS

  • Líder : LEANDRO DIAS DA SILVA
  • MIEMBROS DE LA BANCA :
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • ANGELO PERKUSICH
  • EVANDRO DE BARROS COSTA
  • IVO AUGUSTO ANDRADE ROCHA CALADO
  • LEANDRO DIAS DA SILVA
  • Data: 28-ago-2020


  • Resumen Espectáculo
  • The high incidence and prevalence of chronic kidney disease (CKD), frequently caused by late diagnosis, is a critical public health problem, especially in developing countries such as Brazil. CKD treatment therapies, such as dialysis and kidney transplantation, increase morbidity and mortality rates as well as public health costs. Initially, this study analyzed the use of machine learning techniques to assist in the early diagnosis of CKD in developing countries. Qualitative and quantitative comparative analyses were conducted using, respectively, a systematic literature review and an experiment with machine learning techniques, applying the k-fold cross-validation method in the Weka software to a CKD dataset. From these analyses, it was possible to discuss the suitability of machine learning techniques for CKD risk screening, focusing on low-income and hard-to-reach settings in developing countries, due to the specific problems they face, such as inadequate primary care. Based on the results, the J48 decision tree proved to be a suitable machine learning technique for this screening in developing countries, owing to the easy interpretation of its classification results, with 95.00% accuracy, reaching almost perfect agreement with the opinion of an experienced nephrologist. On the other hand, the random forest, naive Bayes, support vector machine, multilayer perceptron, and k-nearest neighbors techniques achieved, respectively, 93.33%, 88.33%, 76.66%, 75.00%, and 71.67% accuracy, showing at least moderate agreement with the nephrologist, at the cost of a more difficult interpretation of the classification results. Given this conclusion, the J48 decision tree was used to develop an intelligent system to assess CKD risk in developing countries. In addition, when a CKD patient is outside their municipality and an emergency occurs, the system recommends that they go to an appropriate health unit, depending on their clinical situation, to avoid late or inadequate health care.
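
For readers outside the Weka ecosystem, the screening experiment can be approximated in Python with scikit-learn's decision tree (an entropy-based tree, not an exact reimplementation of J48/C4.5) evaluated with k-fold cross-validation; the dataset and parameters below are placeholders, not the CKD data used in the study.

    # Hedged sketch: 10-fold cross-validation of a decision tree for CKD screening.
    # scikit-learn's DecisionTreeClassifier with the entropy criterion is only an
    # approximation of Weka's J48 (C4.5), used here for illustration.
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        # Placeholder standing in for a CKD dataset (clinical attributes + label).
        X = rng.normal(size=(400, 24))
        y = rng.integers(0, 2, size=400)    # 1 = at risk of CKD, 0 = not at risk

        tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(tree, X, y, cv=cv, scoring="accuracy")
        print(f"10-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")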

7
  • IALLY CRISTINA SILVEIRA DE ALMEIDA
  • A COMPARATIVE ANALYSIS OF APPROACHES FOR ABSTRACT TEST GENERATION BASED ON COLOURED PETRI NET MODELS

  • Líder : LEANDRO DIAS DA SILVA
  • MIEMBROS DE LA BANCA :
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • ANGELO PERKUSICH
  • LEANDRO DIAS DA SILVA
  • LEONARDO MELO DE MEDEIROS
  • Lenardo Chaves e Silva
  • Data: 28-ago-2020


  • Resumen Espectáculo
  • Model-Based Testing (MBT) relies on models of system behavior to generate abstract tests. Testers reuse formal specifications (e.g., Coloured Petri Net (CPN) models) to design tests for safety-critical systems. In this work, through a tertiary review, a considerable number of literature reviews focused on analyzing the use of specification languages to perform MBT was identified. However, there is still a research gap regarding the analysis of CPN-based approaches for abstract test generation. To fill this gap, this work also carried out a comparative analysis of approaches for abstract test generation based on CPN models, by means of a systematic literature review and a case study on medical systems: electrocardiography and an insulin infusion pump. The comparative analysis provides information for testers who need to select the most suitable abstract test generation approach when applying MBT using CPN. From the results obtained, it was possible to identify that the choice depends on the size of the state space of the CPN model.

8
  • JEAN BARROS TEIXEIRA
  • AUTOMATIC IDENTIFICATION OF SOCIAL PRESENCE IN ONLINE DISCUSSIONS WRITTEN IN PORTUGUESE

  • Líder : EVANDRO DE BARROS COSTA
  • MIEMBROS DE LA BANCA :
  • EVANDRO DE BARROS COSTA
  • PATRICK HENRIQUE DA SILVA BRITO
  • RAFAEL FERREIRA LEITE DE MELLO
  • RODRIGO LINS RODRIGUES
  • Data: 28-ago-2020


  • Resumen Espectáculo
  • This master's thesis presents a method that allows the automated analysis of messages from online distance-learning forums written in Brazilian Portuguese. In particular, it addresses the problem of coding discussion messages into levels of social presence, an important construct of the Community of Inquiry model widely used in online learning. Although coding techniques for social presence exist for the English language, the literature still lacks methods for other languages, such as Portuguese. The method proposed here uses a set of 158 features extracted from two resources available for textual analysis, LIWC and Coh-Metrix, through text mining techniques, to build a classifier for each of the three categories of social presence. Three types of algorithms were used, Random Forest, AdaBoost, and XGBoost, and the best model achieved 85.68% accuracy and a Kappa index (k) of 0.70, which represents substantial agreement, well above chance level. This work also provides an analysis of the nature of social presence, observing the classification features that were most relevant to distinguishing the three categories of social presence, and a comparative analysis of the main features identified in the phases of presence across different domains.
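
To make the evaluation concrete, the sketch below trains one classifier per social presence category on a precomputed LIWC/Coh-Metrix feature matrix and reports accuracy and Cohen's kappa via cross-validation; the feature matrix, labels, and category names are placeholders, not the actual corpus or models of the dissertation.

    # Hedged sketch: per-category classifiers over LIWC/Coh-Metrix text features,
    # evaluated with accuracy and Cohen's kappa.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, cohen_kappa_score
    from sklearn.model_selection import cross_val_predict

    if __name__ == "__main__":
        rng = np.random.default_rng(7)
        n_messages, n_features = 1000, 158   # 158 LIWC + Coh-Metrix features
        X = rng.normal(size=(n_messages, n_features))
        categories = ["affective", "interactive", "cohesive"]  # illustrative names

        for cat in categories:
            y = rng.integers(0, 2, size=n_messages)   # placeholder binary labels
            clf = RandomForestClassifier(n_estimators=300, random_state=0)
            pred = cross_val_predict(clf, X, y, cv=10)
            print(f"{cat}: accuracy={accuracy_score(y, pred):.4f}, "
                  f"kappa={cohen_kappa_score(y, pred):.4f}")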

9
  • JOSE HENRICK VIANA RAMALHO
  • Omnino: an open omnidirectional robot platform for swarms

  • Líder : HEITOR JUDISS SAVINO
  • MIEMBROS DE LA BANCA :
  • ARMANDO ALVES NETO
  • ERICK DE ANDRADE BARBOZA
  • HEITOR JUDISS SAVINO
  • ICARO BEZERRA QUEIROZ DE ARAUJO
  • Data: 31-ago-2020


  • Resumen Espectáculo
  • This work presents an open robot platform for research and educational activities. The platform is an omnidirectional robot, which is one of the first steps in a robotics curriculum when one wants to control a robot, since it imposes no restrictions on motion in the plane. The robots are designed to be low-cost and with open access to both software and hardware. Most of the mechanical design is also easy to print on commonly used 3D printers. The robot is integrated with ROS (Robot Operating System), widely used in the robotics community, and a ROS-Gazebo simulation model is provided to allow an easy transition from the simulation environment to the experimental one. Real experiments with the proposed robots are shown, as well as the multi-robot simulation platform.

10
  • EDUARDO FELIPE DE SOUZA
  • MMI-GAN: Multi Medical Imaging Translation using Generative Adversarial Network

  • Líder : MARCELO COSTA OLIVEIRA
  • MIEMBROS DE LA BANCA :
  • MARCELO COSTA OLIVEIRA
  • TIAGO FIGUEIREDO VIEIRA
  • PAULO MAZZONCINI DE AZEVEDO MARQUES
  • Data: 23-nov-2020


  • Resumen Espectáculo
  • Medical image translation is considered a new frontier in the field of medical image analysis, with great potential for application. However, existing approaches have limited scalability and robustness in handling more than two image domains, since different models must be created independently for each pair of domains. In addition, there are problems with the quality of the translated images. To address these limitations, we developed MMI-GAN, a new approach for translation between multiple image domains, capable of translating inter-modality (CT and MRI) and intra-modality (PD, T1, and T2) images. We demonstrate the effectiveness of the conditional approach in learning to map between several domains using only a single generator and a single discriminator, trained on image data from all domains. We propose a GAN architecture that can be easily extended to other translation tasks for the benefit of the medical imaging community. MMI-GAN builds on recent advances in Generative Adversarial Networks (GANs), using an adversarial structure with a new combination of non-adversarial losses, which allows the simultaneous training of several datasets with different domains in the same network, as well as the ability to translate flexibly both between modalities and within a modality.
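
The key architectural idea, a single generator conditioned on a target-domain label so that one network serves all modality pairs (in the spirit of StarGAN-style multi-domain translation), can be sketched as follows; this is not the MMI-GAN architecture itself, and the layer configuration, domain count, and losses are assumptions.

    # Hedged sketch: one generator handling several image domains by concatenating
    # a target-domain one-hot code to the input image as extra channels.
    import torch
    import torch.nn as nn

    class MultiDomainGenerator(nn.Module):
        def __init__(self, n_domains=5, channels=1):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels + n_domains, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),
            )

        def forward(self, image, target_domain):
            # image: (batch, channels, H, W); target_domain: (batch, n_domains) one-hot.
            b, _, h, w = image.shape
            code = target_domain.view(b, -1, 1, 1).expand(b, target_domain.size(1), h, w)
            return self.net(torch.cat([image, code], dim=1))

    if __name__ == "__main__":
        gen = MultiDomainGenerator(n_domains=5)        # e.g., CT, MRI-PD, T1, T2, ...
        scans = torch.randn(2, 1, 128, 128)
        target = torch.eye(5)[torch.tensor([1, 3])]    # translate to domains 1 and 3
        print(gen(scans, target).shape)                # torch.Size([2, 1, 128, 128])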

11
  • BENEDITO FERNANDO ALBUQUERQUE DE OLIVEIRA
  • Atoms of Confusion Do Really Cause Confusion? A Controlled Experiment Using Eye Tracking
  • Líder : MARCIO DE MEDEIROS RIBEIRO
  • MIEMBROS DE LA BANCA :
  • MARCIO DE MEDEIROS RIBEIRO
  • ALAN PEDRO DA SILVA
  • LEOPOLDO MOTTA TEIXEIRA
  • Data: 26-nov-2020


  • Resumen Espectáculo
  • Code comprehension is crucial in software maintenance activities, though it can be hindered by misunderstandings and confusion patterns, namely, atoms of confusion. They are small pieces of code using specific programming language constructs, such as Conditional Operators and Comma Operators. A previous study showed that these atoms of confusion impact developers' performance, i.e., time and accuracy, and increase code misunderstandings. However, empirical knowledge of the impact of such atoms on code comprehension is still scarce, especially when it comes to analyzing that impact on developers' visual attention. The present study evaluates, with an eye tracker, whether developers misunderstand code in the presence of atoms of confusion. For this purpose, we measured time and accuracy and analyzed the distribution of visual attention. We conducted a controlled experiment with 30 students and software practitioners. We asked the subjects to specify the output of three tasks with atoms and three without atoms, randomly assigned using a Latin Square design. We used an eye-tracking camera to detect the visual attention of the participants while they solved the tasks. From an aggregated perspective, we observed an increase of 43.02% in time and 36.8% in gaze transitions in code snippets with atoms. For accuracy, no statistically significant difference was observed. We also confirmed that the regions that received most of the eye attention were the regions with atoms. Our findings reinforce that atoms hinder developers' performance and comprehension, so developers should avoid writing code with them.

12
  • BRUNO HENRIQUE LIRA DOS ANJOS
  • PREDCGAN: An approach to synthetic lung nodule generation with the use of Pre-Training.

  • Líder : MARCELO COSTA OLIVEIRA
  • MIEMBROS DE LA BANCA :
  • MARCELO COSTA OLIVEIRA
  • THALES MIRANDA DE ALMEIDA VIEIRA
  • PAULO MAZZONCINI DE AZEVEDO MARQUES
  • Data: 27-nov-2020


  • Resumen Espectáculo
  • Early detection and treatment of lung cancer is important. However, classifying nodules with a convolutional neural network using few real computed tomography images is a difficult process. To work around this problem, this work proposes PREDCGAN, in which a pre-training step is added to the pipeline of a generative adversarial network for the generation of pulmonary nodules. As a result, using these synthetic images for data augmentation, in conjunction with classical techniques to classify pulmonary nodules, achieved an AUC of 0.791, the best value compared to the other methods evaluated in this work.

13
  • MATHEUS SOARES DE ARAÚJO
  • Reliability Analysis of Multi-parameter Monitors Used in Intensive Care Units

  • Líder : LEANDRO DIAS DA SILVA
  • MIEMBROS DE LA BANCA :
  • LEANDRO DIAS DA SILVA
  • ALVARO ALVARES VDE CARVALHO CESAR SOBRINHO
  • THIAGO DAMASCENO CORDEIRO
  • LEONARDO MONTECCHI
  • GILBERTO FRANCISCO MARTHA DE SOUZA
  • Data: 30-nov-2020


  • Resumen Espectáculo
  • A multi-parameter monitoring system is usually applied to keep track of the clinical condition of patients in Intensive Care Units (ICUs). ICUs are continuous monitoring environments used to host patients in serious health conditions. These Systems of Systems (SoS) used in ICUs comprise a set of Constituent Systems (CS) to measure parameters such as heart rate, respiratory frequency, and temperature. Due to the critical nature and relevance of ICUs, such SoS shall be as reliable as possible. This is especially true in emergency situations, as is the case of the COVID-19 outbreak, which overburdened health care systems. Multi-parameter monitoring systems in ICUs shall have high levels of reliability, given that failures can pose risks to the safety of patients. We performed reliability analysis and provided insights to assist the management of multi-parameter monitoring systems used in ICUs, also considering preventive maintenance. We elicited requirements by interviewing a professional with more than fifteen years of experience in hospital maintenance. In addition, we analyzed existing systems and literature reviews. We then modeled multi-parameter monitoring systems for ICUs using the CHESS-ML modeling language. Afterward, we conducted reliability analysis by applying the state-based analysis of the CHESS-SBA plugin, simulating different scenarios. Based on the reliability analysis, we identified that the main power supply and the battery are the CS with the largest negative impact on the total reliability of the entire system in failure situations. In emergency situations, reduced preventive maintenance intervals, when applied during a short period, proved to be promising strategies to increase the quality of multi-parameter monitoring systems for ICUs.
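
As a back-of-the-envelope illustration of why the mains supply and battery dominate, the sketch below computes the reliability over time of a simplified monitor modeled as the power subsystem (mains and battery in parallel redundancy) in series with the remaining constituent systems, assuming exponentially distributed times to failure; the failure rates are made-up values, and this is far simpler than the CHESS-ML/CHESS-SBA state-based models used in the study.

    # Hedged sketch: reliability over time of a simplified multi-parameter monitor,
    # modeled as (mains OR battery) in series with the other constituent systems.
    import math

    def exp_reliability(rate_per_hour, t_hours):
        """R(t) = exp(-lambda * t) for an exponential failure model."""
        return math.exp(-rate_per_hour * t_hours)

    def system_reliability(t_hours, rates):
        """rates: dict of per-hour failure rates for each constituent system."""
        r_mains = exp_reliability(rates["mains"], t_hours)
        r_battery = exp_reliability(rates["battery"], t_hours)
        # Parallel redundancy: power fails only if both mains and battery fail.
        r_power = 1.0 - (1.0 - r_mains) * (1.0 - r_battery)
        # Series structure: every remaining constituent system must work.
        r_rest = 1.0
        for name, rate in rates.items():
            if name not in ("mains", "battery"):
                r_rest *= exp_reliability(rate, t_hours)
        return r_power * r_rest

    if __name__ == "__main__":
        rates = {             # illustrative failure rates (failures per hour)
            "mains": 1e-3,
            "battery": 5e-4,
            "ecg_module": 1e-5,
            "spo2_module": 1e-5,
            "display": 2e-5,
        }
        for t in (24, 24 * 30, 24 * 180):
            print(f"R({t} h) = {system_reliability(t, rates):.4f}")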
