Doctoral Schools WUT

Search Engine for Promoters and Research Areas

List of research areas related to the tag Large-Language-Models:

# Research area Scientific discipline
1 Use of information technology for proactive detection (identification and exposure) of disinformation and misinformation content: automated methods for flagging disinformation content, tracking propagation channels, and detecting the original sources that spread false information. I am primarily interested in projects that combine engineering sciences with social sciences in preventing and combating disinformation, e.g. by applying Generative Adversarial Networks and Large Language Models together with a psychosocial analysis of the problem. It is important to me to focus on the impact of disinformation on crisis situations, amplification of public sentiment, and manipulation of public opinion (public health crises, natural disasters, warfare using asymmetric and hybrid methods).
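One of the tasks named above, detecting the original source of spreading false information, can be illustrated with a toy sketch. This is not the author's system; the repost-graph representation and both function names are hypothetical, chosen only to show the idea of tracing a flagged post back through its propagation channel.

```python
def find_origin(repost_of: dict, post: str) -> str:
    """Follow repost links until reaching a post that reshared nothing.

    `repost_of` maps each post ID to the post it reshared (a toy,
    hypothetical representation of a propagation graph)."""
    seen = set()
    while post in repost_of and post not in seen:
        seen.add(post)
        post = repost_of[post]
    return post

def propagation_channel(repost_of: dict, post: str) -> list:
    """Return the chain of posts from a flagged item back to its origin."""
    chain = [post]
    seen = {post}
    while post in repost_of and repost_of[post] not in seen:
        post = repost_of[post]
        chain.append(post)
        seen.add(post)
    return chain
```

For example, with `repost_of = {"c": "b", "b": "a"}`, `find_origin(repost_of, "c")` returns `"a"` and `propagation_channel(repost_of, "c")` returns `["c", "b", "a"]`. A real system would build such a graph from platform metadata and combine it with content-level flagging.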

2 A prevalent theme in contemporary representation learning is pre-training large foundation models on huge datasets. Such approaches rely on static datasets constructed at a particular point in time, which contrasts with the constantly changing and expanding nature of data available on the internet. The proposed research will explore a new paradigm in which the training dataset is constructed on the fly by querying the internet, enabling efficient adaptation of representation learning models to selected target tasks. The aims of this project are 1) to design methods that query for relevant training data and use it to adapt the representation learning model in a continuous manner, and 2) to make progress towards self-supervised methods that, given a description of a task, autonomously formulate their learning curricula, query the internet for relevant training data, and use it to iteratively optimize the model.
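The query-then-adapt loop described above can be sketched as follows. Every function name here is a hypothetical placeholder, not an existing API: `query_internet` stands in for a real retrieval step (e.g. a web search API) and `adapt` for a self-supervised update.

```python
def query_internet(task_description: str, round_idx: int) -> list:
    # Placeholder retrieval step: in the proposed paradigm this would
    # query the internet for data relevant to the task description.
    return [f"{task_description} sample {round_idx}-{i}" for i in range(3)]

def adapt(model: dict, batch: list) -> dict:
    # Placeholder self-supervised update; here we only count samples seen.
    model = dict(model)
    model["steps"] = model.get("steps", 0) + len(batch)
    return model

def continual_adaptation(task_description: str, rounds: int = 4) -> dict:
    """Build the training set on the fly and adapt the model iteratively."""
    model = {"task": task_description}
    for r in range(rounds):
        batch = query_internet(task_description, r)  # dataset constructed on the fly
        model = adapt(model, batch)                  # continuous adaptation
    return model
```

The point of the sketch is the control flow: the dataset never exists as a static artifact; each round's data is fetched for the current task and consumed immediately by the update step.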


3 Abstract Visual Reasoning (AVR) comprises problems resembling those in human IQ tests. For example, Raven's Progressive Matrices present a set of images arranged in a 3x3 grid with a missing panel in the bottom-right corner. The test-taker has to discover the relations governing the 2D shapes (and their attributes) in the images and select, from a provided set of options, the answer that best completes the matrix. In general, AVR tasks focus on fundamental cognitive abilities such as analogy-making, conceptual abstraction, and extrapolation, which makes advances delivered by this research applicable to diverse areas extending well beyond the investigated tasks. In this research we plan to verify the abilities of Large Language Models (LLMs) and Large Vision Models (LVMs) to solve AVR tasks, both synthetic and based on real-world images.
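A toy Raven's-style instance makes the task structure concrete. This sketch reduces each panel to a single attribute (shape count) with a constant-progression rule per row; it is an illustration of the task format only, not one of the benchmarks or models discussed above.

```python
# 3x3 matrix of panel attributes (shape counts); each row increases the
# count by a fixed step, and the bottom-right panel is missing.
matrix = [
    [1, 2, 3],
    [2, 3, 4],
    [3, 4, None],  # missing bottom-right panel
]
options = [2, 4, 5, 6]  # provided answer set

def solve(matrix, options):
    """Infer the row rule from a complete row and pick the matching option."""
    step = matrix[0][1] - matrix[0][0]   # constant progression within a row
    expected = matrix[2][1] + step       # extrapolate the incomplete row
    return next(o for o in options if o == expected)
```

Here `solve(matrix, options)` returns `5`. Real AVR benchmarks encode multiple interacting attributes (shape, size, color, position) per panel, which is what makes them a meaningful probe of abstraction in LLMs and LVMs.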