ifi Lunchtime Seminar

The Lunchtime Seminar series is intended as a forum for internal presentations of recently completed and ongoing research. It takes place between 12 and 1 PM during term. A light buffet lunch is offered to all attendees who arrive on time. Talks start at 12:15 and last 30 minutes, leaving 15 minutes for Q&A.

________________________________________________________________________________________

Announced Events:

________________________________________________________________________________________

The next Lunchtime Seminar will be on Thursday, 9th of March.

________________________________________________________________________________________

________________________________________________________________________________________

Previous Events:

________________________________________________________________________________________

Preoperative planning for rigid and non-rigid conditions for image-guided minimally invasive surgery

Lecturer:
Noura Hamze
Postdoctoral researcher at IGS group, University of Innsbruck

Date: Thursday, 26th of January 2017, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:

Image-guided minimally invasive surgery is becoming very common in hospitals today. Despite its advantages compared to conventional open surgery, its major difficulties are the reduced visibility inside the body and the limited maneuverability of surgical tools. Therefore, precise preoperative planning of the surgical tool trajectories is a key factor for a successful intervention. In this talk, I will present our previous work on preoperative path planning for surgical tools, and show how we could increase intervention safety levels by considering intra-operative deformation during the preoperative planning phase. Our methods combine geometry-based optimization techniques with physics-based simulations. The developed techniques are widely applicable; examples of two different surgical procedures will be shown: percutaneous procedures for hepatic tumor thermal ablation, and neurosurgical deep brain stimulation. Finally, I will also briefly outline our ongoing and future research on forearm orthopedic surgery planning in the Interactive Graphics and Simulation Group.

________________________________________________________________________________________

Strong Modular Proof Assistance: Reasoning across Theories

Lecturer:
Cezary Kaliszyk
Research assistant at CL group, University of Innsbruck

Date: Thursday, 19th of January 2017, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:

As proofs of correctness of programs become more important in modern complex designs, automatically providing proof advice becomes a principal challenge. The strongest general-purpose advice and automation for formal proofs is today provided by learning-reasoning systems called hammers. In this talk we will discuss several limitations of the current early generation of hammer systems and present new AI methods that combine the knowledge and the reasoning techniques of the current systems into a smart learning and reasoning system working over a large part of today’s body of formalized knowledge. We will also show how uniform learning methods and encoding components generalize advice for different proof assistants into a general advice system for semi-formal and informal proofs.

________________________________________________________________________________________

Discriminative models for multi-instance problems with tree-structure

Lecturer:
Tomas Pevny
Researcher at the Agent Technology Center (ATC), Czech Technical University in Prague

Date: Thursday, 12th of January 2017, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:

Modeling network traffic is gaining importance as a way to counter modern threats of ever-increasing sophistication. It is, however, surprisingly difficult and costly to construct reliable classifiers on top of telemetry data due to the variety and complexity of signals, which no human can interpret in full. Obtaining training data with a sufficiently large and variable body of labels can thus be seen as a prohibitive problem. The goal of this work is to detect infected computers by observing their HTTP(S) traffic collected from network sensors, which are typically proxy servers or network firewalls, while relying on only minimal human input in the model training phase. We propose a discriminative model that makes decisions based on all of a computer’s traffic observed during a predefined time window (5 minutes in our case). The model is trained on traffic samples collected over equally sized time windows for a large number of computers, where the only labels needed are human verdicts about the computer as a whole (presumed infected vs. presumed clean). As part of training, the model itself recognizes discriminative patterns in traffic targeted at individual servers and constructs the final high-level classifier on top of them. We show that the classifier performs with very high precision, while the learned traffic patterns can be interpreted as Indicators of Compromise. We implement the discriminative model as a neural network with a special structure reflecting two stacked multi-instance problems. The main advantages of the proposed configuration include not only improved accuracy and the ability to learn from coarse labels, but also automatic learning of server types (together with their detectors) which are typically visited by infected computers.
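
To make the multi-instance setting more concrete, the sketch below shows a single multi-instance layer in PyTorch that pools per-request embeddings into one bag-level prediction trained only on bag labels; the talk's model stacks two such levels (requests per server, servers per computer). Layer sizes, mean pooling, and the dummy data are illustrative assumptions, not the presented architecture.

```python
# Minimal multi-instance classifier sketch (illustrative assumptions, not the
# architecture from the talk): per-instance embeddings are pooled into a single
# bag representation, and only the bag-level label is needed for training.
import torch
import torch.nn as nn

class BagClassifier(nn.Module):
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.instance_net = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.bag_net = nn.Linear(hidden, 1)

    def forward(self, bag):
        # bag: (n_instances, n_features), e.g. all HTTP requests of one computer
        # observed in a 5-minute window
        h = self.instance_net(bag)        # embed each request
        pooled = h.mean(dim=0)            # multi-instance pooling over the bag
        return self.bag_net(pooled)       # one logit for the whole bag

model = BagClassifier(n_features=10)
bag = torch.randn(57, 10)                 # 57 requests with 10 features each (dummy data)
label = torch.tensor([1.0])               # bag-level verdict: "presumed infected"
loss = nn.BCEWithLogitsLoss()(model(bag), label)
loss.backward()
```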

________________________________________________________________________________________

Development of a Risk-Based Test Strategy and its Industrial Evaluation

Lecturer:
Priv.-Doz. Dr. Michael Felderer
Senior Researcher at QE group, University of Innsbruck

Date: Thursday, 15th of December, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Risk-based testing has a high potential to improve the software test process as it helps to optimize the allocation of resources and provides decision support for management. But for many organizations, the integration of risk-based testing into an existing test process is a challenging task. An essential first step when introducing risk-based testing in an organization is to establish a risk-based test strategy which considers risks as the guiding factor to support all testing activities in the entire software lifecycle. In this presentation, we provide an overview of risk-based testing and present a process for risk-based test strategy development and refinement. The process was created as part of a research transfer project on risk-based testing that provided the opportunity to get direct feedback from industry and to evaluate the ease of use, usefulness, and representativeness of each process step together with five software development companies. Furthermore, we present an outlook on ongoing research on the integration of defect prediction and risk-based testing.
________________________________________________________________________________________

A One-for-All Exams Generator: Written Exams, Online Tests, and Live Quizzes with R

Lecturer:
Univ.-Prof. Dr. Achim Zeileis
Professor at Department of Statistics, University of Innsbruck

Date: Thursday, 1st of December, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
A common challenge in large-scale courses is that many variations of similar exercises are needed for written exams, online tests conducted in learning management systems (such as Moodle, OLAT, Blackboard, etc.), or live quizzes with voting via smartphones or tablets. Here, we introduce a set of open-source tools – tied together by the R package “exams” (https://CRAN.R-project.org/package=exams) – which facilitate these tasks. The package is based on individual exercises that are either in R/LaTeX or R/Markdown format and can contain questions/solutions with random numbers, text snippets, or even individualized datasets. The exercises can be combined into exams and easily rendered into a number of output formats including PDF, HTML, XML for Moodle or OLAT, etc. It will be illustrated how the Department of Statistics at Universität Innsbruck manages its large statistics and mathematics courses using PDF exams that can be automatically scanned and evaluated, online tests in the OpenOLAT learning management system, and live quizzes in the ARSnova audience response system.
________________________________________________________________________________________

Reliable Analysis of Functional Logic Programs

Lecturer:
Thomas Sternagel
Research assistant at CL group, University of Innsbruck

Date: Thursday, 17th of November, 12:00 – 1:00

Venue: 3W03, ICT Building, 2nd floor, Technikerstraße 21a, 6020 Innsbruck

Abstract:
More and more often, computer programs run in parallel on multiple cores, CPUs, or even distributed systems. The likelihood of errors in a program increases with its complexity. If we are lucky, these errors do not show up in practice, but more likely they will surface at some point and cause financial damage, the loss of life, or both. Testing parallel programs is becoming more difficult and is clearly not enough.
What we want are methods to prove properties of programs formally and automatically. In this regard, already the choice of programming language is crucial. We opt for functional logic programs to preclude certain kinds of concurrency problems from the start. A suitable model of computation for functional logic programs is conditional term rewriting.
This talk will be about how to formalize, implement, and certify methods for conditional term rewriting in order to check certain properties of functional logic programs.
________________________________________________________________________________________

From Plagiarism Detection to Bible Analysis: The Potential of Grammar-Based Text Analysis

Lecturer:
Dr. Michael Tschuggnall
Postdoctoral Researcher at DBIS, University of Innsbruck

Date: Thursday, 10th of November, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
The amount of textual data available from digitalized sources such as free online libraries or social media posts has increased drastically in the last decade. As one consequence, it becomes easier for a plagiarist to find suitable sources, while on the other hand it gets harder for automated tools to detect fraud. This talk gives an overview of how textual analysis can help to reveal potential plagiarism, namely by inspecting the grammatical writing style of authors. Moreover, related tasks like authorship attribution or author profiling, which aim to identify the writer of a document or to extract meta information like gender and age by investigating the writing style, can be tackled using similar algorithms. Finally, analyses of the original Bible writings in Old Hebrew were also conducted, revealing promising results.
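
As a toy illustration of what grammar-based stylometry can look like (not the speaker's method), the snippet below compares two texts by the distribution of their part-of-speech tag bigrams, a crude proxy for grammatical writing style; the NLTK tagger and the L1 distance are arbitrary choices.

```python
# Toy grammar-based style comparison (illustrative only, not the talk's method).
# Requires NLTK with the 'punkt' and 'averaged_perceptron_tagger' data packages.
from collections import Counter
import nltk

def pos_bigram_profile(text):
    """Relative frequencies of part-of-speech tag bigrams in a text."""
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text))]
    bigrams = Counter(zip(tags, tags[1:]))
    total = sum(bigrams.values()) or 1
    return {bg: c / total for bg, c in bigrams.items()}

def style_distance(text_a, text_b):
    """L1 distance between the POS-bigram profiles of two texts."""
    pa, pb = pos_bigram_profile(text_a), pos_bigram_profile(text_b)
    return sum(abs(pa.get(k, 0.0) - pb.get(k, 0.0)) for k in set(pa) | set(pb))

# A small distance between a suspicious passage and a known source is one weak
# indicator of plagiarism; authorship attribution compares a document against
# profiles of candidate authors in the same way.
```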
________________________________________________________________________________________

Skill learning by robotic playing

Lecturer:
Simon Hangl
Research assistant at IIS group, University of Innsbruck

Date: Thursday, 3rd of November, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Robots are widely used in industry. However, they only work well in highly restricted and controlled environments, in which it is relatively easy to program them. The next step is to enable robots to work in unstructured environments, in which the applications are countless (e.g. household robots). One important and still unsolved problem is (semi-)autonomous skill acquisition. Current approaches mostly require a large set of training samples and/or strong task priors built into the learning method itself. We investigate a method for robotic playing for autonomous skill acquisition that can be applied in unstructured environments. We further introduce concepts like boredom, creativity, and curiosity to robots in order to guide them during the learning process.
________________________________________________________________________________________

Uncertainty in Workflow Scheduling and Execution in the Cloud

Lecturer:
Dr. Sashko Ristov
Postdoctoral Researcher at DPS group, University of Innsbruck

Date: Thursday, 27th of October, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
The performance of cloud resources is uncertain because of elastic resource provisioning and the unstable performance of multi-tenant VMs over time. This affects the performance of workflow applications even more due to their data and control dependencies, and opens two challenges that we will present: modeling the uncertainty in workflow scheduling and in workflow execution. Our scheduling model improves the estimation of the Pareto-optimal set of scheduling solutions that are robust against fluctuations in processing times. Additionally, our workflow execution model simulates executions more closely than the state of the art, with a simpler simulator configuration.
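
The following sketch only illustrates the notion of a Pareto-optimal set used above: candidate schedules are filtered so that only those not dominated in both objectives remain. The objectives and values are made up and do not come from the talk.

```python
# Illustration of a Pareto-optimal set over two objectives (values are made up):
# a schedule is kept if no other schedule is at least as good in both objectives
# and strictly better in one.
candidates = {
    "schedule_A": (120.0, 4.0),   # (expected makespan in s, cost in $)
    "schedule_B": (150.0, 2.5),
    "schedule_C": (130.0, 4.5),   # dominated by schedule_A
    "schedule_D": (200.0, 2.0),
}

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = {name: obj for name, obj in candidates.items()
          if not any(dominates(other, obj) for other in candidates.values())}
print(sorted(pareto))   # ['schedule_A', 'schedule_B', 'schedule_D']
```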
________________________________________________________________________________________

Multimedia forensics: a deterministic approach

Lecturer:
Dr. Cecilia Pasquini
Postdoctoral Researcher at SEC group, University of Innsbruck

Date: Thursday, 20th of October, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
The increasing availability and pervasiveness of multimedia data, coupled with easy access to user-friendly editing software, motivates research on multimedia forensics, which develops forensic tools for verifying the authenticity of multimedia data. This is mostly done by studying traces left in the signal by any operation that could have been employed as post-processing, either for malicious purposes or simply to improve its content or presentation.
The majority of forensic approaches are based on statistical properties of the signal and the operation considered. In contrast, we explore the possibility of defining, and exploiting in the forensic analysis, properties that are deterministically related to a certain processing operation. In this respect, we present an approach targeted at detecting, in 1D data, a common data smoothing operation: the median filter. The main peculiarity of this method is its ability to provide a deterministic response on the presence of median filtering traces in the data under investigation.
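
For readers unfamiliar with the operation in question, the snippet below simply applies a standard 1D median filter to a signal; it illustrates the smoothing operation whose traces are being detected, not the deterministic detector presented in the talk.

```python
# Standard 1D median filtering (the operation under investigation),
# not the detection method itself.
import numpy as np
from scipy.ndimage import median_filter

signal = np.array([3.0, 3.1, 12.0, 3.2, 3.0, 2.9, 3.1])   # dummy data with one outlier
smoothed = median_filter(signal, size=3)                    # window of 3 samples
print(smoothed)   # the outlier at index 2 is replaced by a neighbourhood median
```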

________________________________________________________________________________________

Tuning Task Parallelism: Granularity Control and Context-aware Optimization

Lecturer:
Dr. Peter Thoman
Postdoctoral Researcher at DPS group, University of Innsbruck

Date: Thursday, 13th of October, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
For many algorithms – including divide-and-conquer methods and branch-and-bound computations – nested task parallelism is a more natural fit than flat data parallelism. However, the latter is far more widely used in high-performance parallel programs at this point in time. We identify reasons for the current divide between parallelism theory and application development reality, and present some approaches to mitigate the underlying issues. The focus is on providing solutions which allow application programmers to focus on simply expressing the parallelism available in their algorithms, rather than concerning themselves with hardware-, system- and application-specific performance tuning. The methods presented include compiler optimizations, runtime system tuning, and context-aware parallel API design.
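
As a minimal illustration of nested task parallelism with a simple granularity cutoff (a generic sketch, not the compiler or runtime techniques from the talk; the workload, cutoff value, and use of Python threads are assumptions):

```python
# Nested task parallelism with granularity control: recursive calls spawn a task
# for one subproblem and keep the other on the current thread, but fall back to
# sequential execution below a cutoff so tiny subproblems carry no task overhead.
# (In CPython the GIL limits real speedup; the point here is the task structure.)
import threading

CUTOFF = 15   # assumed granularity threshold

def fib(n):
    """Naive Fibonacci as a stand-in divide-and-conquer workload."""
    if n < 2:
        return n
    if n < CUTOFF:
        return fib(n - 1) + fib(n - 2)          # sequential below the cutoff
    result = {}
    def left():
        result["l"] = fib(n - 1)
    t = threading.Thread(target=left)           # spawn a nested child task
    t.start()
    right = fib(n - 2)                          # work on the sibling task ourselves
    t.join()
    return result["l"] + right

if __name__ == "__main__":
    print(fib(25))
```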

________________________________________________________________________________________

Implementing Threat Intelligence Sharing Platforms: Challenges and obstacles

Lecturer:
Christian Sillaber
Research assistant at QE group, University of Innsbruck

Date: Thursday, 6th of October, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
In the last couple of years, organizations have demonstrated an increased willingness to participate in threat intelligence sharing platforms. The exchange of information about threats, vulnerabilities, incidents, and mitigation strategies results from the organizations’ growing need to collectively protect against today’s sophisticated cyber-attacks. However, the increasing amount of data that is shared via these platforms, multiple data sources to be integrated, a lack of proper quality controls, as well as a frequent mismatch between the value proposition of such platforms and the organizations’ requirements lead to high friction in early stages of platform implementation. In a series of workshops and interviews, we identified challenges that early adopters of threat intelligence sharing platforms face and how they can be mitigated. The findings show that the successful implementation of threat intelligence sharing platforms requires a good alignment between information system security risk management and the business environment, direct integration into the existing security management tool landscape, and capable analysis mechanisms. We present results from an ongoing research project spotlighting data quality challenges in threat intelligence sharing platforms and related frictions organizations face when implementing such platforms.

________________________________________________________________________________________

Analysing the Usage of Wikipedia on Twitter

Lecturer:
Dr. Eva Zangerle
Postdoctoral researcher at DBIS group, University of Innsbruck

Date: Thursday, 23rd of June, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Wikipedia is a central source of information: 450 million people consult the online encyclopaedia every month to satisfy their information needs. Some of these users also refer to Wikipedia within their tweets. Firstly, we analyze the usage of Wikipedia on Twitter by looking into the languages used on both platforms, content features of posted articles, and recent edits of those articles. Secondly, we analyze links within tweets referring to a Wikipedia of a language different from the tweet’s language. In particular, we investigate causes for the usage of such inter-language links by comparing the tweeted article and its counterpart in the tweet’s language (if there is any) in terms of article quality. We find that the main cause for inter-language links is the non-existence of the article in the tweet’s language. Furthermore, we observe that the quality of the tweeted articles is consistently higher than that of their counterparts, suggesting that users choose the article of higher quality even when tweeting in another language. Moreover, we find that English is the most dominant target for inter-language links.
________________________________________________________________________________________

Certified Automated Confluence Analysis of Rewrite Systems

Lecturer:
Julian Nagele
Research assistant at CL group, University of Innsbruck

Date: Thursday, 16th of June, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Term rewriting is a simple yet Turing complete model of computation. Equipped with clear semantics it underlies much of declarative programming and automated reasoning. Arguably one of the most important properties of rewrite systems is confluence, which guarantees that computations are deterministic in the sense that any two diverging computation paths can be joined eventually, thus ensuring that results are unique. Although undecidable in general, much work has been spent on confluence analysis and recently powerful, automatic tools have been developed. However, with great power comes great software complexity, and consequently such tools may contain errors and produce wrong answers and proofs. The predominant solution is to develop independent, highly trusted certifiers that can be used to verify the proofs generated by untrusted tools. To ensure correctness of the certifier itself, its soundness is formally shown in a proof assistant.
This talk discusses automated confluence analysis of rewrite systems, with special focus on certification.
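
To make the notion of joinability concrete, here is a toy, made-up string rewrite system in which the peak b <- a -> c joins because both sides reach the common term d; this only illustrates one joinable peak, not the certified confluence analysis discussed in the talk.

```python
# Toy string rewrite system (made up): a -> b, a -> c, b -> d, c -> d.
RULES = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]

def rewrites(s):
    """All strings reachable from s in one rewrite step."""
    out = set()
    for lhs, rhs in RULES:
        i = s.find(lhs)
        while i != -1:
            out.add(s[:i] + rhs + s[i + len(lhs):])
            i = s.find(lhs, i + 1)
    return out

def reachable(s, limit=1000):
    """All strings reachable from s (this toy system is terminating)."""
    seen, todo = {s}, [s]
    while todo and len(seen) < limit:
        for t in rewrites(todo.pop()):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    return seen

# The peak b <- a -> c is joinable: both reducts reach the common string "d".
print(reachable("b") & reachable("c"))   # {'d'}
```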
________________________________________________________________________________________

The Art of MPI Benchmarking

Lecturer:
Sascha Hunold
Assistant professor at the Research Group Parallel Computing, Vienna University of Technology

Date: Thursday, 9th of June, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:
The Message Passing Interface (MPI) is the prevalent programming model used on current supercomputers. Therefore, MPI library developers are looking for the best possible performance (shortest run-time) of individual MPI functions across many different supercomputer architectures. Several MPI benchmark suites have been developed to assess the performance of MPI implementations. Unfortunately, the outcome of these benchmarks is often neither reproducible nor statistically sound. To overcome these issues, we show which experimental factors have an impact on the run-time of blocking collective MPI operations and how to measure their effect. We present a new experimental method that allows us to obtain reproducible and statistically sound measurements of MPI functions. To obtain reproducible measurements, a common approach is to synchronize all processes before executing an MPI collective operation. Thus, we take a closer look at two commonly used process synchronization schemes: (1) relying on MPI_Barrier or (2) applying a window-based scheme using a common global time. We analyze both schemes experimentally and show the strengths and weaknesses of each approach. Lastly, we propose an automatic way to check whether MPI libraries respect self-consistent performance guidelines. In this talk, we take a closer look at the PGMPI framework, which can benchmark MPI functions and detect violations of performance guidelines.
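
A minimal sketch of synchronization scheme (1), timing a blocking collective after an MPI_Barrier, written with mpi4py; the message size, repetition count, and the choice of MPI_Bcast are arbitrary assumptions, and neither the window-based scheme nor the PGMPI framework is reproduced here.

```python
# Barrier-synchronized timing of a blocking collective (scheme (1)).
# Run with, e.g.: mpirun -np 4 python bench_bcast.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
buf = np.zeros(1 << 16)              # arbitrary message size (64 Ki doubles)
times = []

for _ in range(100):                 # arbitrary number of repetitions
    comm.Barrier()                   # synchronize all processes before the call
    t0 = MPI.Wtime()
    comm.Bcast(buf, root=0)          # the blocking collective being measured
    times.append(MPI.Wtime() - t0)

local = min(times)                                   # per-rank summary
slowest = comm.reduce(local, op=MPI.MAX, root=0)     # report the slowest rank
if comm.rank == 0:
    print(f"MPI_Bcast: {slowest * 1e6:.1f} us")
```

________________________________________________________________________________________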

Robots learning like a child

Lecturer:
Univ.-Prof. Dr. Justus Piater
Head of the IIS group, University of Innsbruck

Date: Thursday, 2nd of June, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:

General-purpose autonomous robots for deployment in unstructured domains such as service and household settings require a high level of understanding of their environment. For example, they need to understand how to handle objects, how to operate devices, the function of objects and their important parts, etc. How can such understanding be made available to robots? Hard-coding is not feasible, and conventional machine learning approaches will not work in such high-dimensional, continuous perception-action spaces with realistic amounts of training data. One way to get robots to learn higher-level concepts may be to focus on simple learning problems first, and then learn harder problems in ways that make use of the simpler, already-learned problems. For example, learning problems can be stacked by making the output of lower-level learners available as input to higher-level learning problems, effectively turning hard problems into easier problems by expressing them in terms of highly predictive attributes. This talk discusses how this can be done, including further boosting learning efficiency by active learning, and automatic, unsupervised structuring of sets of learning problems and their interconnections. Following a stacked learning approach, we discuss how symbolic planning operators can be formed in the continuous sensorimotor space of a manipulator robot that explores its world, and how the acquired symbolic knowledge can be further used in developing higher-level reasoning skills.
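
A minimal sketch of the stacking idea (purely illustrative, using scikit-learn on made-up data; this is not the robot system from the talk): the prediction of a learner trained on a simpler subproblem is added as a highly predictive attribute for a harder learning problem.

```python
# Stacked learning problems on toy data: the low-level learner's output becomes
# an input attribute of the higher-level learner (names and data are made up).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                            # raw (dummy) sensorimotor features
y_low = (X[:, 0] + X[:, 1] > 0).astype(int)              # simpler subproblem
y_high = ((X[:, 0] + X[:, 1] > 0) & (X[:, 2] > 0)).astype(int)   # harder, composite problem

low = LogisticRegression().fit(X, y_low)                 # learn the simple problem first
attr = low.predict_proba(X)[:, [1]]                      # its prediction = new attribute

high = LogisticRegression().fit(np.hstack([X, attr]), y_high)    # stacked learner
# (A real evaluation would of course use held-out data for the higher level.)
```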
________________________________________________________________________________________

Risk Assessment for Socio-Technical Systems

Lecturer:
Christian W. Probst
Associate professor at the Department of Applied Mathematics and Computer Science, Technical University of Denmark

Date: Thursday, 19th of May, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Attacks on systems and organisations increasingly exploit human actors, for example through social engineering. This non-technical aspect of attacks complicates their formal treatment and automatic identification. Formalisation of human behaviour is difficult at best, and attacks on socio-technical systems are still mostly identified through brainstorming by experts. In this talk we will present some results of the TREsPASS project for risk assessment of socio-technical systems. Based on an analysis of the system under scrutiny, we identify all possible attacks in the system and measure their potential impact, likelihood of success, and cost. Together, these factors provide us with the means to assess the risk faced by the system and to identify relevant countermeasures.
________________________________________________________________________________________

Predicting Soft Tissue Deformations Using Patient-Specific Meshless Model for Whole-Body CT Image Registration

Lecturer:
Mao Li
Postdoctoral researcher at IGS group, University of Innsbruck

Date: Thursday, 12th of May, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Non-rigid registration algorithms that align source and target images play an important role in image-guided surgery and diagnosis. For problems involving large differences between images, such as registration of whole-body radiographic images, biomechanical models have been proposed in recent years. Biomechanical registration has been dominated by the Finite Element Method (FEM). In practice, a major drawback of FEM is the long time required to generate patient-specific finite element meshes and to divide (segment) the image into non-overlapping constituents with different material properties. We eliminate time-consuming mesh generation through application of the Meshless Total Lagrangian Explicit Dynamics (MTLED) algorithm, which utilises a computational grid in the form of a cloud of points. To eliminate the need for segmentation, we use a fuzzy tissue classification algorithm to assign material properties to the meshless grid. Comparison of the organ contours in the registered (i.e. source image warped using deformations predicted by our patient-specific meshless model) and target images indicates that our meshless approach facilitates accurate registration of whole-body images with local misalignments of only up to two voxels.
________________________________________________________________________________________

Fixation Patterns During Process Model Creation: Initial Steps Toward Neuro-adaptive Process Modeling Environments

Lecturer:
Mag. Manuel Neurauter
Research assistant at QE group, University of Innsbruck

Date: Thursday, 28th of April, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Despite their wide adoption in practice, industrial business process models often display a wide range of quality issues. While significant efforts have been undertaken to better understand factors impacting process model comprehension, only a few studies have focused on the creation of process models. To better support users during task execution and to reduce cognitive overload, neuro-adaptive information systems provide promising perspectives. This talk presents a set of fixation patterns that have been derived from a modeling session with 120 participants, during which the behavior of the modelers was recorded in terms of eye movements as well as model interactions. The identified patterns can be used for automatic real-time detection of the activities a user is performing, an essential building block for the development of a neuro-adaptive environment for process modeling that is able to best fit the task at hand and the user’s individual processing capacities.
________________________________________________________________________________________

Three-Way Replication Industry Standard – High Storage Cost in a Bandwidth Limited Regime?

Lecturer:
Nishant Saurabh
Research assistant at DPS group, University of Innsbruck

Date: Thursday, 21st of April, 12:00 – 1:00

Venue: HSB 7, Hörsaaltrakt-BI-Gebäude, Technikerstraße 13b, EG, 6020 Innsbruck

Abstract:
Three-way replication has been widely adopted in large-scale distributed storage systems to enhance fault-tolerance. However, a major storage cost overhead is incurred as a result of maintaining three replicas, each typically gigabytes in size or beyond. Furthermore, data extraction hits a major roadblock in a bandwidth-limited regime, resulting in application overheads that are specific to each storage resource. In this work, we consider the Virtual Machine Image (VMI) as the resource to be stored and its application overhead in terms of VMI distribution to the cloud provider. As an alternative to replication, we focus on erasure coding, a technique initially used for secure information dispersal, to achieve similar availability and scalability at lower storage cost. We also propose a decentralized erasure-coded VMI storage repository architecture as a middleware system, with a view to reducing the aforementioned overheads and providing services to federated cloud models, avoiding provider lock-in and enabling rapid VM provisioning.
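
For intuition about the storage-cost argument, here is a small back-of-the-envelope comparison using generic MDS erasure-coding arithmetic (the image size and code parameters are illustrative, not those of the proposed repository): a (k, m) code stores k data plus m parity fragments and survives the loss of any m fragments.

```python
# Storage overhead: 3-way replication vs. a generic (k, m) MDS erasure code.
image_gb = 10                         # one VMI of 10 GB (illustrative size)

replicated_gb = 3 * image_gb          # three full copies, tolerates loss of 2 copies -> 30 GB

k, m = 6, 3                           # 6 data + 3 parity fragments, tolerates loss of any 3
erasure_coded_gb = image_gb * (k + m) / k                  # 10 * 9/6 = 15 GB

print(replicated_gb, erasure_coded_gb)   # 30 vs 15: half the raw storage for similar fault-tolerance
```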
________________________________________________________________________________________

Detection of Copy-Move Forgeries in Scanned Text Documents

Lecturer:
Svetlana Abramova
Research assistant at SEC group, University of Innsbruck

Date: Thursday, 14th of April, 12:00 – 1:00

Venue: 3W04, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
A copy-move image forgery refers to copying a portion of an image and re-inserting it (or a filtered version of it) elsewhere in the same image, with the intent of hiding undesirable contents or duplicating particular objects of interest. The detection of such forgeries has been studied extensively; however, all known methods were designed and evaluated for digital images depicting natural scenes. In this talk, I will address the problem of detecting and localizing copy-move forgeries in images of scanned text documents. The purpose of the analysis is to study how block-based detection of near-duplicates performs in this application scenario, considering that even authentic scanned text contains multiple, similar-looking glyphs (letters, numbers, and punctuation marks). I will present the results of a series of experiments on scanned documents, carried out to examine the operation of several feature representations with respect to the correct detection of copied image segments and the minimization of false positives. The findings indicate that, subject to specific threshold and parameter values, block-based methods show modest performance in detecting copy-move forgery in scanned documents. I will present strategies to further adapt block-based copy-move forgery detection approaches to this relevant application scenario.
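
To illustrate the block-based scheme referred to above (a generic, simplified baseline, not the feature representations evaluated in the talk), the sketch below flags image blocks whose content recurs at a consistent spatial offset; as the abstract notes, repeated glyphs in authentic scanned text make the choice of features and thresholds critical.

```python
# Generic block-based copy-move detection baseline (simplified illustration).
import numpy as np
from collections import Counter

def copy_move_candidates(img, block=8, min_votes=50):
    """img: 2D uint8 grayscale array. Returns offsets at which many identical
    blocks recur, i.e. candidate copy-move displacements."""
    h, w = img.shape
    first_seen = {}
    offsets = Counter()
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            # crude block feature: the coarsely quantized pixels themselves;
            # real methods use DCT coefficients, PCA, etc.
            f = (img[y:y + block, x:x + block] // 16).tobytes()
            if f in first_seen:
                y0, x0 = first_seen[f]
                offsets[(y - y0, x - x0)] += 1   # same content seen before: record offset
            else:
                first_seen[f] = (y, x)
    # offsets collecting many votes suggest a copied-and-moved region
    return [off for off, votes in offsets.items() if votes >= min_votes]
```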
________________________________________________________________________________________

Region-based Software Auto-tuning

Lecturer:
Dr. Juan Durillo
Postdoctoral researcher at DPS group, University of Innsbruck

Date: Thursday, 7th of April, 12:00 – 1:00

Venue: SR 1/2, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Software auto-tuning is the process of automatically tuning the code of an application. From initial approaches aimed at reducing the execution time of programs to more sophisticated techniques that optimize several criteria simultaneously, the last decade has witnessed an ever-growing interest in this field. The success of auto-tuners relies on two basic properties: (1) efficient exploration of different ways to execute an application; and (2) portability, as auto-tuners can be easily run on any hardware architecture.
Despite their popularity, a wider adoption of auto-tuners to optimize real-world applications is far from being a reality. On the one hand, for real-world scientific applications, the number of possible ways to execute them explodes as a consequence of an ever-increasing number of tuning opportunities offered at both the software and hardware levels. On the other hand, most current auto-tuners have proved successful only for specific classes of applications and when applied to small applications composed of a few lines of code, while even small-to-medium real applications consist of at least a few thousand lines of code.
Our goal is to advance the current state of the art in software auto-tuning, aiming for a wider adoption of auto-tuning methods to optimize real-world applications. In our model, applications are partitioned into different regions, which are the units of tuning. This way, regions with different characteristics are optimized in different ways: CPU-intensive operations, for example, can be performed with the CPU at the highest clock frequency to reduce the application's run time; conversely, memory access operations can be performed at a low frequency, reducing energy consumption with minimal impact on performance. We aim to address three major challenges related to tuning applications using a region-based approach: (1) how to partition applications into different regions; (2) how to select those regions of a program that are worth the tuning effort, and which ones should not be tuned at all or beyond a given level; and (3) how to effectively and efficiently tune complex regions to optimize several criteria.
________________________________________________________________________________________

Automated Complexity Analysis of Programs

Lecturer:
Michael Schaper
Research assistant at CL group, University of Innsbruck

Date: Thursday, 17th of March, 12:00 – 1:00

Venue: HSB 7, Technikerstraße 13b, 6020 Innsbruck

Abstract:
Automatically checking programs for correctness has attracted the attention of the computer science research community since the birth of the discipline. In this talk we present an abstract combination framework for the automated complexity analysis of programs and its implementation in the Tyrolean Complexity Tool (TcT).
________________________________________________________________________________________

The Symbiotic Relationship between Computer Vision, Machine Learning and Neuroscience

Lecturer:
Dr. Antonio Rodríguez-Sánchez
Assistant Professor at the IIS group, University of Innsbruck

Date: Thursday, 10th of March, 12:00 – 1:00

Venue: SR 1, ICT Building, Technikerstraße 21a, 6020 Innsbruck

Abstract:
Computer Vision and Machine Learning benefit from the latest knowledge in Neuroscience to build systems that closely resemble human efficiency in those areas. On the other hand, Neuroscience benefits from Computer Vision and Machine Learning to test hypotheses on how connections in the brain (more specifically the visual cortex) work. I will present two approaches concerning how Computer Vision benefits from Neuroscience and how Machine Learning provides hypotheses on how to obtain neural populations resembling those in the visual cortex. For the former, I will present a 3D descriptor that is inspired by recent findings from neurophysiology. The descriptor incorporates surface curvatures and distributions of local surface point projections that represent flatness, concavity and convexity in a 3D object-centered and view-dependent representation. For the latter, I will talk about how utilizing diversity priors can discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. A diversity prior is introduced in this work for training Restricted Boltzmann Machines (RBMs). We find that the diversity prior can indeed simultaneously ensure the sparsity and selectivity of neuron activations.
________________________________________________________________________________________
