Institute of Computer Science

The Czech Academy of Sciences

"A computer lets you make more mistakes faster than any other invention with the possible exceptions of handguns and Tequila." - M. Ratcliffe

HORA INFORMATICAE (meaning: TIME FOR INFORMATICS) is a broad-spectrum scientific seminar devoted to all core areas of computer science and its interdisciplinary interfaces with other sciences and applied domains. Original contributions addressing classical and emerging topics are welcome. Founded by Jiří Wiedermann, the seminar has been running since 1994 at the Institute of Computer Science of the Czech Academy of Sciences in Prague.

#### Model M - an agent-based epidemiological model

06.12.2022, 14:00, Plenary room (room 318, second floor), Institute of Computer Science; Zoom 954 7823 4977 (passcode: 712564)

During the recent pandemic, interest in epidemiological modeling rapidly increased. Epidemiological models improve our understanding of the dynamics of disease spread and help in the design of various protective measures. An important family of models is formed by agent-based models, which provide simulation tools for detailed modeling of individual human behaviour. We will present our network agent model, called model M. It works with a population of individuals (agents) whose contacts are modeled as a multi-graph social network built from real data for a Czech county. Custom algorithmic procedures simulating testing, quarantine, and partial closures of various contact types are implemented. The model can serve as a tool for the relative comparison of the efficacy of various policies. It was also used in a study comparing various interventions in Czech primary and secondary schools, using a graph built from real data from a selected Czech school.
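As an illustration of the general agent-based approach (a minimal sketch, not the actual Model M code — all names and parameter values such as `simulate_sir`, `beta`, and `recovery_days` are hypothetical), one can simulate disease spread over a contact multi-graph with a per-contact transmission probability and a fixed recovery period:

```python
import random

def simulate_sir(contacts, n_agents, beta, recovery_days, seed_agent, days, rng):
    """One run of a toy agent-based SIR epidemic on a contact list.

    contacts: list of (i, j) edges; repeated pairs model a multi-graph
    (more shared contact layers => higher effective transmission).
    """
    state = ["S"] * n_agents          # S, I, or R per agent
    days_infected = [0] * n_agents
    state[seed_agent] = "I"
    for _ in range(days):
        newly_infected = []
        for i, j in contacts:         # each contact can transmit in either direction
            for src, dst in ((i, j), (j, i)):
                if state[src] == "I" and state[dst] == "S" and rng.random() < beta:
                    newly_infected.append(dst)
        for agent in newly_infected:  # infections take effect at the end of the day
            state[agent] = "I"
        for agent in range(n_agents):
            if state[agent] == "I":
                days_infected[agent] += 1
                if days_infected[agent] >= recovery_days:
                    state[agent] = "R"
    return state

rng = random.Random(0)
# a small random contact network standing in for the real county-level data
edges = [(rng.randrange(50), rng.randrange(50)) for _ in range(200)]
final = simulate_sir(edges, 50, beta=0.05, recovery_days=7,
                     seed_agent=0, days=60, rng=rng)
print(sum(s != "S" for s in final), "agents were ever infected")
```

Interventions like quarantine or school closures can then be modeled by removing or down-weighting edges of particular contact layers before a run.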


#### Some implications of high-dimensional geometry for classification by neural networks

22.11.2022, 14:00, Plenary room (room 318, second floor), Institute of Computer Science; Zoom 954 7823 4977 (passcode: 712564)

The computational difficulties of multidimensional tasks, called the "curse of dimensionality", have long been known. On the other hand, the almost deterministic behavior of some randomized models and algorithms depending on large numbers of variables can be attributed to the "blessing of dimensionality". These phenomena can be explained by rather counter-intuitive geometric properties of high-dimensional spaces. They imply concentration of the values of sufficiently smooth functions of many variables around their mean values, and the possibility of reducing the dimensionality of data by random projections. In the lecture, it will be shown how these properties of high-dimensional geometry can be employed to obtain insights into the suitability of various types of neural networks for the classification of large data sets. Probabilistic bounds on network complexity will be derived using concentration properties of approximation errors based on the Azuma and McDiarmid inequalities. Consequences for the choice of network architectures will be analyzed in terms of growth functions and VC dimensions of sets of network input-output functions. The general results will be illustrated by examples of deep perceptron networks with various piecewise polynomial activation functions (ReLU, RePU).
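The concentration phenomena behind the "blessing of dimensionality" are easy to observe numerically. The following sketch (illustrative only, not taken from the lecture) measures how the spread of pairwise distances between random points shrinks relative to their mean as the dimension grows:

```python
import math
import random

def pairwise_distance_spread(dim, n_points, rng):
    """Relative spread (max - min) / mean of pairwise Euclidean distances
    between random points in [0, 1]^dim; it shrinks as `dim` grows,
    a simple instance of distance concentration."""
    pts = [[rng.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(p, q) for k, p in enumerate(pts) for q in pts[k + 1:]]
    return (max(dists) - min(dists)) / (sum(dists) / len(dists))

rng = random.Random(1)
for dim in (2, 20, 200, 2000):
    print(dim, round(pairwise_distance_spread(dim, 30, rng), 3))
```

In low dimension, distances vary widely; in high dimension, almost all pairs of points are nearly equidistant, which is one reason nearest-neighbour-style reasoning degrades while random projections preserve geometry so well.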

#### Autonomous machines and machine minds

08.11.2022, 14:00, Plenary room (room 318, second floor), Institute of Computer Science; Zoom 954 7823 4977 (passcode: 712564)

Autonomous machines, like robots and self-driving cars, are machines that perform behaviors or tasks with a high degree of autonomy (without, or with minimal, external influence). Such machines operate and cooperate in the real world while coping with unpredictable external phenomena and error-prone technology. We model any autonomous machine as a cyber-physical system controlled by universal processors implementing so-called minimal machine consciousness, minimal machine experience, and minimal machine understanding. In the talk, we present the respective model in more detail, define and justify our machine versions of consciousness, experience, and understanding, and indicate their computational realization. We argue that endowing the systems with all these cognitive feats leads to more trustworthy, safer, and more reliable systems, by increasing their behavioral flexibility and autonomy in varying environments and by strengthening their resilience to operation or cooperation failures of their components or as a whole. The talk is based on joint research with Jan van Leeuwen, Utrecht, NL.

#### How to deal with data contamination in hypothesis testing

16.12.2019, 14:00

The theory of testing statistical hypotheses aims not only at proposing suitable tests, but also (and mainly) at deriving procedures that are optimal under certain conditions. Any type of data contamination, however, requires non-standard approaches to testing. This talk, intended for non-statisticians, will briefly overview the principles of several recent results to which I contributed, including:

- minimax tests under measurement errors,
- locally optimal tests based on sequential ranks,
- tests based on ranks of interpoint distances among high-dimensional observations,
- symmetry tests based on robust multivariate estimators.
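To see why rank-based procedures resist contamination, consider this toy sketch (illustrative only, not one of the tests from the talk; the names `rank_sum`, `a`, `b` are hypothetical): one grossly erroneous observation shifts a sample mean arbitrarily far, while a rank-sum statistic can change by at most the pooled sample size:

```python
import random

def rank_sum(sample, other):
    """Wilcoxon-style rank-sum statistic: the sum of the ranks that the
    observations of `sample` receive in the pooled sorted data (no ties assumed)."""
    pooled = sorted(sample + other)
    return sum(pooled.index(x) + 1 for x in sample)

rng = random.Random(2)
a = [rng.gauss(0.5, 1) for _ in range(15)]
b = [rng.gauss(0.0, 1) for _ in range(15)]

w_clean = rank_sum(a, b)
a_bad = a[:]
a_bad[0] += 1e6                      # one grossly contaminated observation

mean_shift = abs(sum(a_bad) / 15 - sum(a) / 15)
rank_shift = abs(rank_sum(a_bad, b) - w_clean)
print(mean_shift, rank_shift)        # the mean explodes; the rank statistic barely moves
```

The contaminated point can at worst move to the extreme rank, so the statistic's change is bounded no matter how gross the error — the basic robustness property that the rank-based tests above exploit.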

#### Automata and Turing Computation with Synfire Ring-Based Neural Networks

04.03.2019, 14:00

Synfire rings are layered looping neuronal structures that allow for robust, temporally precise and self-sustained spiking activities. We show that finite state automata and Turing machines can be simulated by recurrent neural networks composed of synfire rings. While the original constructions involve simple Boolean neurons, we investigate the generalization of the results to the more biological contexts of Izhikevich and Hodgkin-Huxley neural nets. These considerations show that a neuro-inspired paradigm for abstract computation is possible and potentially exploitable. They might constitute a very first step towards the realization of neuronal computers.
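The basic idea of simulating a finite automaton with simple Boolean threshold neurons can be sketched as follows (a minimal hypothetical encoding, not the synfire-ring construction itself): one AND-neuron per transition and one OR-neuron per state, updated synchronously:

```python
def step(state_bits, symbol, delta, n_states, alphabet):
    """One synchronous update of a Boolean threshold network encoding a DFA.

    Layer 1: one AND-neuron per transition (p, a), firing iff state unit p and
    input unit a are both active (threshold 2).
    Layer 2: one OR-neuron per state q, firing iff some transition neuron
    targeting q fired (threshold 1).
    """
    input_bits = {a: int(a == symbol) for a in alphabet}
    trans_fire = {(p, a): int(state_bits[p] + input_bits[a] >= 2)
                  for (p, a) in delta}
    return [int(sum(fire for (p, a), fire in trans_fire.items()
                    if delta[(p, a)] == q) >= 1)
            for q in range(n_states)]

# toy DFA over {0, 1} accepting strings with an even number of 1s
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
state = [1, 0]                       # one-hot encoding: start in state 0
for sym in "1101":
    state = step(state, sym, delta, 2, "01")
print("accepted" if state[0] else "rejected")  # three 1s -> rejected
```

In the synfire-ring setting, each state unit is replaced by a ring whose self-sustained activity plays the role of the persistent one-hot state bit.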

#### Probabilistic Bounds on Complexity of Networks Classifying Large Data Sets

18.11.2019, 14:00, Room 318, Institute of Computer Science

The number of binary classification tasks on a finite domain (representing a set of vectors of features, measurements, or observations) grows exponentially with its size. However, for a given application area, the relevance of many such tasks may be low or negligible. The lecture will investigate a probabilistic framework modeling prior knowledge about the probability that the presence of certain features implies a property described by one of the classes. The impact of increasing domain sizes on correlations between the input-output mappings of computational models and randomly chosen classifiers will be analyzed. It will be shown that on large domains, correlations between tasks and computational units are sharply concentrated around their mean values. Probabilistic bounds on correlations will be derived using various concentration inequalities, some of them holding even without the "naive Bayes assumption". It will be shown that the performance of random classifiers is almost deterministic, in the sense that a given class of computational models can either approximate well almost all tasks or none of them. Consequences for the choice of optimal computational models will be discussed.
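The concentration of correlations on large domains can be observed directly. In this illustrative sketch (not from the lecture; `max_abs_correlation` is a hypothetical name), tasks and classifiers are random ±1-valued functions on a finite domain; by Hoeffding's inequality their correlation concentrates around zero as the domain grows:

```python
import random

def correlation(f, g):
    """Normalized inner product of two ±1-valued functions on a common domain."""
    return sum(x * y for x, y in zip(f, g)) / len(f)

def max_abs_correlation(domain_size, n_classifiers, rng):
    """Largest |correlation| between one fixed random task and many
    independently drawn random classifiers."""
    task = [rng.choice((-1, 1)) for _ in range(domain_size)]
    return max(abs(correlation(task,
                               [rng.choice((-1, 1)) for _ in range(domain_size)]))
               for _ in range(n_classifiers))

rng = random.Random(3)
for n in (100, 10_000):
    print(n, max_abs_correlation(n, 200, rng))
```

Even the best of 200 random classifiers is nearly uncorrelated with the task once the domain is large — the "almost deterministic" behavior described in the abstract.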

#### Interpolant generation algorithms based on quantifier elimination

01.07.2019, 14:00

In their 2006 paper, Kapur, Majumdar and Zarba observed a connection between quantifier elimination and interpolant generation that was probably well known but not explicitly reported in the literature on automated reasoning and formal methods. Since then, I have been investigating how to develop heuristics for quantifier elimination to generate interpolants. In particular, there is no need to have access to a proof to generate interpolants, unlike the methodology widely used in the formal-methods community. I will start with an interpolant generation algorithm for the quantifier-free theory of equality over uninterpreted symbols (EUF). This is followed by an interpolant generation algorithm for octagonal formulas of complexity O(n^3), where n is the number of variables; the generated interpolant is a conjunction of octagonal formulas. Then I will move to nonlinear (quadratic) polynomial inequalities and their combination with EUF. The key idea is that nonlinear concave quadratic polynomials can be linearized and a generalization of Motzkin's transposition theorem can be proved. Time permitting, I will discuss our recent work on using linear and nonlinear deep learning methods based on support vector machines, exploring a totally different approach for generating nonlinear interpolants from more complex formulas, including transcendental functions.
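The quantifier-elimination view of interpolation can be illustrated with Fourier-Motzkin elimination over linear inequalities, the standard textbook procedure (octagonal formulas are the special case with coefficients in {-1, 0, 1}): eliminating a variable local to one formula yields a consequence over the shared vocabulary. This is a generic sketch with hypothetical names, not the speaker's algorithm:

```python
from fractions import Fraction

def eliminate(ineqs, var):
    """Fourier-Motzkin elimination of `var` from a conjunction of constraints.

    Each constraint is ({var: coeff, ...}, bound), meaning sum(coeff*x) <= bound.
    Returns constraints over the remaining variables, equivalent to
    'exists var. original constraints' -- one quantifier-elimination step.
    """
    lower, upper, rest = [], [], []
    for coeffs, bound in ineqs:
        c = coeffs.get(var, 0)
        if c > 0:
            upper.append((coeffs, bound, c))   # gives an upper bound on var
        elif c < 0:
            lower.append((coeffs, bound, c))   # gives a lower bound on var
        else:
            rest.append((coeffs, bound))
    for lc, lb, l in lower:    # combine every lower bound with every upper bound
        for uc, ub, u in upper:
            scale_l, scale_u = Fraction(u), Fraction(-l)
            combined = {}
            for v in set(lc) | set(uc):
                if v != var:
                    c = scale_l * lc.get(v, 0) + scale_u * uc.get(v, 0)
                    if c:
                        combined[v] = c
            rest.append((combined, scale_l * lb + scale_u * ub))
    return rest

# A: x <= y and y <= z, with y local to A and {x, z} shared with B.
# Eliminating y yields x <= z: a consequence in the shared vocabulary,
# which is the shape an interpolant must have.
a = [({"x": 1, "y": -1}, 0), ({"y": 1, "z": -1}, 0)]
print(eliminate(a, "y"))   # x - z <= 0, i.e. x <= z
```

The observation attributed to Kapur, Majumdar and Zarba is exactly this: projecting one formula onto the shared variables by quantifier elimination produces an interpolant, with no proof object required.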

#### Deep Neural Networks in Natural Language Processing

14.01.2019, 14:00

Like other fields, natural language processing has recently been revolutionized by the introduction of deep learning. In my talk, I will focus on two specific properties of natural text as input to a machine learning system, and the current ways in which these are addressed in deep neural networks:

- Representing massively multi-valued discrete data (words) by continuous low-dimensional vectors (word embeddings).
- Handling variable-length input sequences with long-distance relations between elements (sentences) by fixed-sized neural units (attention mechanisms).
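The second mechanism can be sketched in a few lines (a minimal illustrative version of scaled dot-product attention, not code from the talk; all names are hypothetical): a fixed-size unit that summarizes a variable-length sequence of vectors by a learned weighting.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: `keys`/`values` have one row per input
    token, so the same fixed-size unit handles sentences of any length; the
    output is a weighted mix of the value vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# three token embeddings (e.g. word vectors); this query matches token 0 best
keys = values = [[2.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
out = attention([1.0, 0.0], keys, values)
print(out)  # the first component is dominated by token 0's value
```

The word embeddings of the first point would supply the `keys` and `values` here: each discrete word is first mapped to such a low-dimensional continuous vector before attention mixes them.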
