
Security through transparency: Why open source is ahead - Interview with Klaas Freitag, CTO OpenCloud

Klaas Freitag is the CTO at OpenCloud and is responsible for the technical development of our open source file management solution. He is an open source developer by conviction. Over the course of his career, Klaas has served on the board of the KDE Trägerverein, worked for many years as a team lead at the well-known Linux distributor SUSE, and has been involved with the ownCloud server from day one. Since then, he has not let go of the "private cloud" and now actively drives the technical development of OpenCloud, our solution for file sharing and content collaboration.

Security through transparency

You have been known as an open source expert for more than 30 years. Why would you never store your tax return or family photos with Microsoft and co.?

I don't want to hand over any personal data: not "just" the sensitive data, but none at all. All too often, data is collected in order to make money from it, as a basis for AI training or for advertising and analytics. And once it's out there, it can't reliably be taken back.

I don't want to risk a government knowing more about me than necessary, especially when social and political values are shifting.

Private communication is not a crime, but a civil right. Open source software helps to keep data independent and controllable.


In theory, openness increases the attack surface. How do you ensure that openness actually leads to more security and not to new threats?

Openness can seem like a risk at first glance: attackers can analyse the code too. But vulnerabilities exist in all software, open and closed. The key is therefore not to hide them, but to find and fix them. And an open culture is better suited to this: many independent people can assess the software and point out errors so that they can be fixed quickly.

When the way software is developed is itself open and a wide variety of people can contribute to improving the process, that also ensures higher quality.

Practices that are standard in open source development today include code reviews, elaborate CI/CD pipelines, and automated builds and tests. Of course, these also exist in closed development, but often in a "closed room" in which the reviewers are known and genuinely new perspectives are rarely added.
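To make this concrete, here is a minimal sketch of such an automated gate in Python. It is an illustration only: the commands and the Go code base are assumptions chosen for the example, not OpenCloud's actual pipeline.

    #!/usr/bin/env python3
    """Minimal sketch of a CI gate: run static analysis and tests,
    and block the merge if any check fails."""
    import subprocess
    import sys

    # Hypothetical checks; real pipelines also run builds, integration
    # tests, license checks and dependency scans.
    CHECKS = [
        ["go", "vet", "./..."],   # static analysis (assumes a Go code base)
        ["go", "test", "./..."],  # automated tests
    ]

    def main() -> int:
        for cmd in CHECKS:
            print("running:", " ".join(cmd))
            if subprocess.run(cmd).returncode != 0:
                print("check failed: the change must not be merged")
                return 1
        print("all checks passed")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

In a real pipeline, the nonzero exit code is what prevents the change from being merged.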

Attacks such as the infiltration of malicious code are therefore often discovered quickly. Proprietary software lacks this independent verifiability; you simply know less.


The "Coordinated Vulnerability Disclosure" process sounds good on paper. What is it in practice?

The process puts the security of users at the centre. If a vulnerability is found, the finder usually informs the vendor confidentially and sets a deadline by which the vendor has to develop and provide a fix. It is usually agreed that the problem will not be made public until the update is available, so that users remain protected.
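As an illustration of the bookkeeping behind such an agreement, here is a small Python sketch. The 90-day window is a widespread convention (Google Project Zero popularised it), not a fixed rule; finder and vendor can agree on any deadline.

    #!/usr/bin/env python3
    """Sketch of the deadline logic in coordinated vulnerability disclosure."""
    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class VulnerabilityReport:
        reported_on: date       # the confidential report reaches the vendor
        embargo_days: int = 90  # agreed time to develop and ship a fix

        @property
        def disclosure_date(self) -> date:
            return self.reported_on + timedelta(days=self.embargo_days)

        def may_publish(self, today: date) -> bool:
            # Publishing earlier would expose users before a fix exists.
            return today >= self.disclosure_date

    report = VulnerabilityReport(reported_on=date(2025, 1, 15))
    print(report.disclosure_date)                # 2025-04-15
    print(report.may_publish(date(2025, 3, 1)))  # False: embargo still running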

Sometimes a problem affects not just one vendor, but several companies that maintain a component together, and these are not necessarily "friendly companies". Then it gets more complicated, because they still have to act together.

Even in such cases, we agree on procedures and deadlines, for example on when the public disclosure will take place.

This is how it works in practice. It's a promise we make to our customers: We do everything we can to protect them from damage caused by such problems.


You also share information about security vulnerabilities with competitors in the open source sector. Can that not also harbour risks, and how do you minimise them?

"Coordinated disclosure" is a long-established, albeit often non-formal, agreement that is intended to minimise precisely such risks. The idea is that all affected parties are informed discreetly, fixes can be prepared and then published in a coordinated manner at a specific time. The aim is to handle the problem in the safest possible way for users.

A maintainer who avoids this procedure shows that they are not a reliable (business) partner in the open source ecosystem. These things get around quickly and stick in people's minds.

In my sphere of influence, I would always ensure that such agreements are made unambiguously and adhered to meticulously, because in addition to the credibility of the company and the security of users, the reliability of the open source scene as a whole is at stake.


Why are security vulnerabilities published at all?

Because the source is open, they are public anyway, so they could not be swept under the carpet. But more importantly, the procedure promotes transparency, which increases the trust of all users in a project and in the way it is developed.


How do you deal with the criticism that FOSS security processes could be too fragmented and inconsistent if different projects rely on the same components?

In view of the coordinated procedure described above, I would find it difficult to understand such an accusation. In addition, there are higher-level bodies that assign unique identifiers to security issues, for example the CVE numbers maintained by MITRE, so that the same issue is easy for everyone to understand and reference across projects.
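Those shared identifiers make it possible to look up the same issue everywhere. Here is a small Python sketch against the public CVE record service; the endpoint URL and the JSON layout reflect the CVE record format at the time of writing and should be treated as assumptions rather than a stable contract.

    #!/usr/bin/env python3
    """Look up a vulnerability record by its CVE identifier."""
    import json
    import urllib.request

    CVE_ID = "CVE-2024-3094"  # the XZ/liblzma backdoor discussed below

    url = f"https://cveawg.mitre.org/api/cve/{CVE_ID}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)

    # CVE JSON 5 records keep the description under containers -> cna.
    descriptions = record["containers"]["cna"]["descriptions"]
    print(next(d["value"] for d in descriptions if d["lang"] == "en"))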


If open source is so secure, how can it be that there was recently a massive security incident involving the open source library XZ?

This was an interesting case that shook not only the open source scene, but the entire IT world. The mechanisms used to carry out the attack were based on human interaction, and that can happen in closed environments just as well.

It just goes to show that security cannot be taken seriously enough and that attacks are always possible, especially through "social engineering". What is needed here is vigilance, clear processes and a consistently applied four-eyes principle.

And you also have to ask: how would a closed source provider have dealt with such an incident? Would we even have found out about it, and would we as an industry have had the chance to learn from it?


Do you believe that the security advantage of open source software will persist in the long term, or can closed source software catch up using modern methods such as AI-based security analysis?

I am actually convinced that in the closed source world, too, the issue is handled very responsibly in most cases and that appropriate technical tools are used. But that is very difficult for us to verify.

People also often criticise the fact that open source is not comprehensible to everyone. That is true, but the code can be checked by independent experts, and this happens regularly. With closed source, such independent review is often limited or not possible at all.

With open source, you can verify security in case of doubt; with closed source, you have to take it on trust. And AI will not be able to overcome this subtle difference.