News
1st European Experimental Philosophy Conference 2021
Charles University, Prague, June 17-19, 2021 (online)
Experimental philosophy groups active all around Europe have agreed to join forces and establish one big annual European experimental philosophy event. Last year we had to go online and move the first official event to 2021. The Faculty of Science of Charles University, the Institute of Philosophy, the Institute of State and Law and the Institute of Computer Science of the Czech Academy of Sciences (the founding institutions of the Karel Čapek Center for Values in Science and Technology), and the University of Zurich are proud to host the 1st XPhi Europe in June 2021.
The conference was originally planned as a hybrid event taking place from 17th to 19th June 2021 at Charles University, Prague, Czechia, and online. Given that the Covid-19 situation is very serious at the moment (especially in the Czech Republic) and that we cannot predict whether it will be possible to host any event in June at the Faculty of Science, Charles University (which is currently completely closed to students), the organizers have decided to move the conference entirely online. No physical presence at the conference will therefore be possible, and a purely online version of the conference will take place on the originally planned dates, i.e. 17th-19th June 2021. Our website will be regularly updated with any news.
We believe that moving the conference online is still a fantastic alternative for sharing our research within and beyond the XPhi community, and that it will be at least as successful as the Online XPhi Europe Conference we organised last year.
Confirmed speakers
Helen De Cruz, Saint Louis University
Kathryn Francis, Keele University
Michael Laakasuo, University of Helsinki
Kevin Reuter, University of Zurich
Pascale Willemsen, University of Zurich
Invited symposia
Symposium on Future Technologies
Symposium on Language and Normativity
Contacts
In case you have any questions concerning the XPhi Europe 2021 conference, please contact xphieurope2021@gmail.com
In case you would like to attend the talks (and thus join the conference updates mailing list), please contact xphieurope2021@gmail.com
To report any technical issues with the web page, please contact robin.kopecky@natur.cuni.cz
Web
https://xphiprague.eu/
A kitten or a grandma? There are more important issues
An Interview with Monika Mareková
Monika Mareková was born in 1987 and comes from Brestovany, Slovakia. She is a law graduate of Masaryk University in Brno, Czech Republic, and the University of Oxford. She also studied law at the University of Fribourg, Switzerland, and Laurentian University in Sudbury, Canada, and human rights at the UN Graduate Study Programme in Geneva. Her master’s thesis on the conflict between the right to privacy and GPS tracking devices was recognised as the Best Human Rights Thesis of 2011 in the Czech Republic. As a student, she sued the Czech Republic for discrimination in not providing fare discounts to foreign students and achieved a change in legislation. Currently, she is an associate at the law firm CMS in Prague, a Research Fellow at the Institute of State and Law of the Czech Academy of Sciences, a member of the Karel Čapek Centre for Values in Science and Technology, and a distance fellow at the Czech Centre for Human Rights and Democratization.
Both in academia and in the law firm, you deal with law focusing on new technologies. What is your relationship to them? Aren’t you paranoid because of your work?
I try not to be. Nevertheless, I am cautious. That means that I click on the cookie settings and when installing something, I always scroll through the terms and conditions at least briefly.
You deal with autonomous cars, facial recognition software, artificial intelligence. How did you get into technology?
During my studies, I dealt with human rights, through which I got into the right to privacy. That was rather a marginal topic at the time. Just look at what we wrote on Facebook ten years ago; we posted everything there. Gradually, however, people have become much more aware of what kind of data companies collect about them and are more concerned about their privacy. With the GDPR on the scene, the topic has gone mainstream. People and companies began to turn to law firms… Academic interest has thus also carried over into commercial practice.
But you still stay in academia, is that right?
Yes, I work part-time at the Academy of Sciences. I think it is beneficial to be anchored in legal practice as well as in theory. At Oxford, some professors operate purely in the academic sphere; they just teach and write articles. That is exceptional in our country, where even academics work in practice, if not in a law firm, then at a court or a ministry…
What do you actually deal with at the Institute of State and Law?
I deal with autonomous vehicles.
So you address moral dilemmas such as whether to go straight and kill five people or swerve and kill just one?
No. In my opinion, these hypothetical cases are overrated. Of course, some of them raise interesting issues, for example whether cars will be egoistic, protecting their passengers, or rather altruistic, protecting their surroundings. At MIT, they developed a game called Moral Machine dealing with a similar topic. In this game, you choose whom you would knock down: a man, a woman, an elderly person, animals, various professions. By the way, people have most often decided to hit cats, then criminals, then dogs. It is actually a cultural issue: in Western countries, people tend to protect young children and women, while in Eastern cultures they protect elderly people. Nevertheless, these are still mainly philosophical concepts. I do not believe that in the near future, sensors in cars will be able to distinguish whether the person crossing the street is twenty or sixty.
I feel that one day these issues will be resolved simply. Changing the trajectory will depend mainly on how far away the object is and whether it still makes sense to slow down… Technical settings. There are many other issues to be resolved.
For example?
In order to operate, autonomous vehicles will need to communicate with the infrastructure. Although a car may behave solely according to what is captured by its sensors, it also needs a great deal of data from its surroundings to make its decision-making more reliable. For example, the green light it sees at the traffic lights may not be very reliable in some cases; instead, the car will exchange information with the intersection. This type of data will also be used to enable the system to learn and develop. In addition, cars will communicate with each other, sharing warnings about dangerous situations, obstacles or icy roads. The more communication, the better the cars.
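To make these data flows concrete, here is a purely illustrative sketch in Python; the message types, field names and fusion rule are invented for this example and do not correspond to any real V2X standard or carmaker API:

```python
# Hypothetical sketch of the car-to-infrastructure and car-to-car data
# flows described above; all types and fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:       # sent by the intersection to approaching cars
    intersection_id: str
    state: str                  # "red", "yellow" or "green"
    seconds_to_change: float

@dataclass
class HazardWarning:            # shared between cars
    location_km: float
    kind: str                   # e.g. "ice", "obstacle"

def may_proceed(camera_state: str, infra: SignalPhaseMessage) -> bool:
    """Trust the onboard camera only when the intersection's own message
    agrees: the sensor reading alone may not be reliable in some cases."""
    return camera_state == "green" and infra.state == "green"

# The camera's green light is confirmed by the intersection...
print(may_proceed("green", SignalPhaseMessage("X42", "green", 12.0)))  # True
# ...but a conflicting message from the infrastructure overrides the sensor.
print(may_proceed("green", SignalPhaseMessage("X42", "red", 3.0)))     # False
```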
However, this means substantial interference with the privacy we still have in cars. Should the police want to track you now and make a record of it, they need a public prosecutor’s warrant. With autonomous vehicles, however, the carmaker will be able to create a complete picture of your movements based on your travel history: whether you go to church or a gay club on a regular basis, or took part in anti-government demonstrations. All of this can be misused and may affect how you will want to use such a car. People generally behave differently when they know they are being watched.
So do you think that carmakers will obtain the kind of information currently available to Facebook or Google?
It is more complicated than that: many actors will have access to your data. The mobile operator providing the network connection will have access to the localisation data. Data will be available to the road infrastructure manager, the carmaker and service providers, for example the local traffic information service, and even the cars must share information with each other. In normal circumstances, the driver could grant GDPR consent to the processing of such data. Nevertheless, it is still necessary to know to whom exactly such consent is granted, and in these future cases you will not know that.
It will be necessary to devise a regulation for handling such personal data. For mobile operators, such handling is regulated by a European directive, which stipulates that operational data about phone calls are stored only for a limited period and that access to such data is limited. The problem is that when it comes to autonomous vehicles, more than one player is involved.
Back to moral issues: there are concerns that autonomous vehicles will not perceive what is happening off the road. I have already mentioned the traffic lights. As a driver, I also notice when someone is approaching a crossing and become more cautious. In order for a car to be able to react so reliably in advance, everyone, or at least cyclists, might need to wear some kind of communicator. Autonomous vehicle systems could then work perfectly. However, that would require a monitored society in which everyone's every step is known. If we want to live freely, without constant supervision, it will be more difficult to come up with a solution.
What is the situation in the Czech Republic? Do we have any regulation of autonomous cars?
An Ethics Committee was to be set up at the Ministry of Transport (I was supposed to be on it), but it has not been appointed yet. Czech legislation does not yet provide for autonomous cars, so they cannot be properly tested here as they can in Germany. The current systems only work to support the driver, who remains responsible. Accordingly, vehicles with built-in intelligent systems ask the driver every few seconds to, for example, move the steering wheel.
We have dealt with privacy, but there are many more legal dilemmas ahead of us, for example those associated with accidents. Today, you can exonerate yourself from liability for a crash if the car was in a poor condition due to the manufacturer or a repair provider. You do not have many other options. With autonomous vehicles? The software could have obtained wrong data or could have evaluated the data mistakenly. A sensor or the navigation might malfunction. A cyber-attack could have been involved. There could be a hundred different problems like these.
Is it possible to say, in general, that law lags behind technological development?
Legislation can try to anticipate developments, to prepare somehow. But in general, it responds to situations after they arise. When a new technology emerges, we rely on the existing law. There is consumer protection, the Civil Code, the Constitution... These might be enough.
However, sometimes the law develops along with a changing technology. This is exactly what we see with GPS tracking devices. Initially, such cases were based on decisions related to ordinary tracking systems. But GPS can track you 24 hours a day for several months, allowing your movement patterns to be identified… So, because of such differences, it was eventually necessary to change the law.
I address such technology issues in my commercial practice, and each case is so new that each time I actually have to rethink it from the basics. It is not just another real estate transaction.
What do you think about the idea that brand new technologies should not be over-regulated, because regulation slows down the economy and we will lag behind the US and China, where it is not such a focus?
Yes, the strictness of the European Union is sometimes given as the reason why most of the technology giants come from the US and China. In my opinion, the problem is rather that regulation is missing there. If we stick to the right to privacy: a breach of that right is not torture; for a long time we do not feel it in any way whatsoever. Yet it can fatally change society and the political system. When someone has access to your private choices, they can collect them and use them to estimate how society thinks and how it develops. Then they can attempt to influence that development. They can serve information to society in such a way that it pushes people unwittingly in the desired direction. Remember the US presidential elections and Cambridge Analytica? That is why I think that regulation is needed.
Do you have any idea about the future development of personal data protection? What will it look like, for example, in ten years?
What issues are we going to address in ten years’ time? Ten years ago, it was mainly terrorism, and the state justified interferences with the right to privacy by the fight against possible attacks. We were just joining Facebook at that time and knew Amazon mainly as a bookstore. The issue of these data giants is being addressed better now. Through them, however, we are back to the state, for example elections being influenced. In addition, the state is still here: coronavirus could become the “new terrorism”.
How do you mean that?
The right to privacy is always a matter of balancing: public safety on the one hand, privacy itself on the other. For example, a new measure has now been introduced under which operators are to provide a memory map of an infected person so that hygienists can trace that person’s contacts. If you give your consent, it is fine. A problem would arise, however, if we interpreted the state of emergency as allowing us to inspect every infected person’s data. We feel an acute threat, so we protect public safety. I do not want to balance it here against human lives; that would sound silly, of course, but such decisions can have a long-term impact on society as a whole. And I believe that this impact might be worse than a terrorist attack or a virus.
Artificial intelligence has become a ubiquitous concept. Do we have any legal definition of artificial intelligence? What does it actually mean?
Today, when a computer works with anything other than completely basic algorithms, we talk about artificial intelligence. It has become a popular phrase. At the same time, the issue is much discussed in legal theory, and there are many ideas on how to capture artificial intelligence. Some people say that we should perhaps use the concept of animal rights.
What does that mean?
“Robot rights” could hypothetically be transferred to the person who owns them. When you have a dog and someone hurts it, that person hurts you too. Or if your dog kills your neighbour’s hen, you have to indemnify your neighbour. In other words: should the robot do anything wrong, you are responsible.
There is also the concept of a technical personality, which is neither a thing nor an animal. Nevertheless, this is an issue to be addressed decades hence, when we will be discussing robots that perceive their own autonomy and keep developing…
Facial recognition technology sounded similarly futuristic. Today, however, we hear about a growing number of countries experimenting with it. It is not just about China. How is the Czech legal system prepared for this?
When facial recognition was supposed to be used in Czech football stadiums, its use was banned by the Office for Personal Data Protection. Under the GDPR, such biometric data falls into a special category of personal data that can be processed only in very exceptional cases, for example due to a substantial public interest, based on an explicit authorisation with a basis in law, and in a manner proportionate to the aim pursued. The Office for Personal Data Protection did not examine whether a substantial public interest could be pursued by using facial recognition technology; in any case, it concluded that there was no explicit basis in law. Let us suppose that there were a law providing for the possibility to protect football match visitors against hooligans with facial recognition technology. Then it would be necessary to assess whether a substantial public interest in protecting public order is at stake, and whether such technology is proportionate and provides for measures to safeguard fundamental rights. If so, you could use this technology. When the Danish authorities dealt with this issue, they decided differently: according to them, the cameras are fine, as they serve to protect a substantial public interest consisting in safety, and the local authority considered the general possibility of introducing measures to protect a substantial public interest to be the basis in law for using cameras in stadiums.
The deployment of such cameras could still be addressed through an individual’s consent. However, such consent must be given freely, which means that there must be a real option to refuse. In the case of stadiums, refusal would mean not being able to go there, which is not a free decision. In France, they faced a similar issue when facial recognition technology was to be used at school entrances subject to the pupils’ consent. The courts decided, however, that the consent was not free at all if refusal meant being denied access to the school.
Constitutional law will be faced with the question of whether such systems can be used in public spaces at all. This means tracking a person’s movement around a city in general and composing a mosaic of where that person went all day, and thus what they were doing. This has not yet been reflected in case law; these are all recent issues, and the French decision, for example, was only taken in February.
We perceive artificial intelligence as a fair, neutral technology. Nevertheless, can it not sometimes be unfair and operate with biases?
Many people think that this is a pseudo-problem and are upset that, for example, gender is even addressed in connection with artificial intelligence. Nevertheless, artificial intelligence has these so-called biases, distortions of perception. If, for example, you notice a person with a different skin colour in the subway, you may automatically become alert; however, you will immediately realise that your reaction does not make sense. Machines lack this ability to correct their behaviour. If machines learn to recognise faces mainly from photos of white people, they keep in memory that this is what a human face looks like.
Artificial intelligence does not have to be intentionally programmed in a discriminatory manner. The problem is often that the data from which machines learn is distorted. For example, one robot screened job seekers’ CVs, and after a while it turned out that it considered men to be better candidates and eliminated women much more often. This was caused by the fact that the initial sample of exemplary good CVs included more men, and the robot began to treat that as an important criterion.
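To illustrate the mechanism with a minimal, hedged sketch (synthetic data only; this is not the system from the case above, and all names and numbers are invented): a classifier trained on historical hiring decisions that favoured men will reproduce that bias even for otherwise identical candidates.

```python
# Minimal synthetic illustration of learned hiring bias; all data and
# coefficients are invented, not taken from any real screening system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Candidate features: years of experience, a skills score, gender (1 = male).
experience = rng.normal(5, 2, n)
skills = rng.normal(0, 1, n)
gender = rng.integers(0, 2, n)

# Historical labels: past recruiters favoured male CVs (+1.5 on the logit),
# so "hired" depends on gender even though competence does not.
logit = 0.3 * experience + 0.8 * skills + 1.5 * gender - 2.0
hired = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

# The screening model learns from those distorted decisions.
X = np.column_stack([experience, skills, gender])
model = LogisticRegression().fit(X, hired)

# Two candidates identical in every respect except gender:
woman, man = [[6.0, 1.0, 0]], [[6.0, 1.0, 1]]
print("P(hire | woman):", model.predict_proba(woman)[0, 1])
print("P(hire | man):  ", model.predict_proba(man)[0, 1])   # noticeably higher
```

Note that simply deleting the gender column would not be enough if other features act as proxies for it; this is why the emphasis on the quality of training data, discussed below, matters.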
However, this probably cannot be prevented by any statutory provision, is that right?
In some respects, we can rely on existing regulations; for example, companies must not have discriminatory selection procedures. In general, however, more emphasis needs to be placed on the data from which systems learn, so that we do not reproduce the inequalities and discrimination of the current real world. We are now creating new systems, a new society. Such a society could at last get rid of the old biases that make our lives difficult and begin to perceive people equally.
Published in Czech in Finmag magazine, April/May 2020. The translation is published on this website with the permission of the editors. The original article can be found in Czech here.
2020 SILFS Prize for Women in Logic and the Philosophy of Science
The deadline for submissions is the 31st of March 2020.
1. The Italian Society for Logic and the Philosophy of Science (SILFS), in order to promote and support the contribution of underrepresented groups to the fields of logic and the philosophy of science, establishes a prize called "SILFS Prize for Women in Logic and the Philosophy of Science".
2. The prize is awarded every two years.
3. Eligibility: all scholars who recognize themselves as women, irrespective of their role, institution, and/or nationality.
4. The prize is awarded by an Evaluating Board Committee, appointed by the SILFS steering committee and composed of some members of the steering committee itself as well as other scholars of logic and the philosophy of science.
5. Applicants are requested to submit an original work (not previously published) written in English (max. 10,000 words, abstract and references not included) on any relevant topic of logic or the philosophy of science broadly conceived; proposals connected to topics and approaches that the community of logicians and philosophers of science would recognize as novel with respect to its mainstream scientific production are particularly welcome.
6. The winner of the 2020 SILFS Prize for Women in Logic and the Philosophy of Science will be invited to present her work at SILFS 2020, the Triennial International Conference of the Italian Society for Logic and the Philosophy of Science, to be held in Bologna from the 7th to the 11th of September 2020. SILFS will reimburse all travel and accommodation expenses (up to a maximum of 1,500 euros).
7. The deadline for submissions for the 2020 SILFS Prize for Women in Logic and the Philosophy of Science is the 31st of March 2020. Papers should be prepared for blind review and sent in PDF format (Word: Times New Roman, 11pt, double spacing; LaTeX: document class article, 11pt) to the Secretary of the SILFS, Giuseppe Sergioli (giuseppe.sergioli@gmail.com).
8. The Evaluation Committee for the 2020 SILFS Prize for Women in Logic and the Philosophy of Science is composed of:
Cristina Amoretti (University of Genoa)
Raffaella Campaner (University of Bologna)
Elena Casetta (University of Turin)
Agata Ciabattoni (Technische Universität Wien)
Giovanna Corsi (University of Bologna)
Luisa Damiano (University of Messina)
Mariangiola Dezani-Ciancaglini (University of Turin)
Laura Felline (Independent researcher)
Eleonora Montuschi (University of Venice, Ca’ Foscari)
Sara Negri (University of Genoa)
Federica Russo (University of Amsterdam)
Viola Schiaffonati (University of Milan Politecnico)