I am sure nobody has invented anything better than OAuth2 over HTTPS. It is absolutely simple and it really works
-------------------
OAuth 2 is a protocol.
It is based on keys and passwords, on ordinary cryptography.
Everything would be fine, if not for the attacks, the theft of password material, the phishing.
Look at a few points of this protocol; everything about it is trivial.
1. Client ID and client secret
After registering the application, the service creates client credentials: a client identifier (client ID) and a client secret. The client ID is a publicly available string that the service API uses to identify the application; it is also used to build authorization URLs for users. The client secret is used to prove the application's authenticity to the service API when the application requests access to the user's account, and it should be known only to the application and the API.
What is good here? Your secret is your problem.
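To make the client ID / client secret distinction concrete, here is a minimal sketch (in Python) of how an application typically builds its authorization URL. The endpoint, client ID and redirect URI are hypothetical placeholders, not taken from any particular service; the point is that only the public client ID ever appears in the URL, never the secret.

    # Minimal sketch: building an OAuth2 authorization URL.
    # The endpoint, client_id and redirect_uri below are hypothetical placeholders.
    from urllib.parse import urlencode

    AUTHORIZE_ENDPOINT = "https://auth.example.com/oauth/authorize"  # hypothetical

    def build_authorization_url(client_id, redirect_uri, scope, state):
        # Only the public client ID goes into the URL; the client secret never does.
        params = {
            "response_type": "code",   # authorization code flow
            "client_id": client_id,
            "redirect_uri": redirect_uri,
            "scope": scope,
            "state": state,            # random value that ties the callback to this request
        }
        return AUTHORIZE_ENDPOINT + "?" + urlencode(params)

    print(build_authorization_url("my-client-id",
                                  "https://app.example.com/callback",
                                  "profile", "random-state-123"))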
2. The user authorizes the application.
When a user clicks on the link, he must first log in to confirm his identity (unless, of course, he is already logged in). After that, the service prompts the user to authorize the application or to refuse.
Again, a point of danger.
3. Grant type: implicit.
The implicit grant is used by mobile and web applications (applications that run in a web browser), where the confidentiality of the client secret cannot be guaranteed. The implicit grant also relies on user-agent redirection: the access token is passed to the user agent, which then hands it to the application. This, in turn, makes the token visible to the user and to other applications on the user's device. With this grant the application is not authenticated at all; the process relies only on the redirect URL (registered with the service in advance).
The implicit grant does not support refresh tokens.
What is reliable here? Applications you have only just downloaded?
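To show why the implicit grant exposes the token, here is a small illustrative sketch (the redirect URL is invented): the access token comes back in the URL fragment, so anything that can read the user agent's address bar can read the token.

    # Sketch: with the implicit grant the access token is returned in the URL
    # fragment, so it is visible to the user, to scripts in the page and to
    # other software on the device. The redirect URL below is a made-up example.
    from urllib.parse import urlparse, parse_qs

    redirect = ("https://app.example.com/callback"
                "#access_token=SlAV32hkKG&token_type=bearer&expires_in=3600")

    fragment = urlparse(redirect).fragment              # everything after '#'
    token_data = {k: v[0] for k, v in parse_qs(fragment).items()}
    print(token_data["access_token"])                   # the token, in plain sight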
4. Grant type: resource owner password credentials.
With this grant, the user gives the application his service credentials (username and password) directly. The application, in turn, uses those credentials to obtain an access token from the service. This grant should be used only when other options are not available, and only if the application is trusted by the user (for example, it is part of the service itself or of the user's operating system).
What a twist! I am expected to trust the applications that I installed myself! Yes, this is the usual system of trust: "I believe" versus "I do not believe".
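For illustration, this is roughly what a token request looks like with this grant. The token endpoint and all credentials are hypothetical; the point is simply that the application itself handles the user's real username and password.

    # Sketch of the resource owner password credentials grant: the application
    # collects the user's real service credentials and exchanges them for a token.
    # Endpoint and credentials are hypothetical; requires the 'requests' package.
    import requests

    TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"  # hypothetical

    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "password",
        "username": "alice",                 # the user's actual service login
        "password": "correct horse battery", # passes through the application itself
        "client_id": "my-client-id",
        "client_secret": "my-client-secret",
        "scope": "profile",
    })
    resp.raise_for_status()
    access_token = resp.json()["access_token"]
    print(access_token)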
What did you find special and reliable in OAuth2 over HTTPS?
Shall we talk about elliptic curve cryptography, supposedly the most reliable in the world, on which the entire blockchain rests and in which the crowd of believers has faith?
I am amazed no one has mentioned Microsoft here, because it is one of the early adopters among the huge companies. Passwordless authentication is good up to a point, because it makes it harder to fall victim to hackers, phishing and so on, thanks to multi-factor authentication. If you are interested in it, you should read what is written on this Microsoft page and also watch the videos, link here:
https://www.microsoft.com/en-us/security/technology/identity-access-management/passwordless
I agree with OP, we really need something like that, and I am amazed that some companies have not even thought about it, especially Ledger and others which aim at the security of crypto wallets.
-------------------------
Thank you very much for the thematic link. I will try to work through the material. I cannot follow the videos because, to my shame, I do not speak English.
In response, for my part, I want to share some interesting analytical material that I found on the Internet and edited.
I do not want to stoke fear among those present here, but you need to know this if you are studying the question of security for real.
This material gives reasoned answers to two important questions:
1. Is elliptic curve cryptography as safe as we think?
2. Are quantum computations really dangerous for modern public-key cryptosystems?
In high circles, among official organizations whose work is directly related to cryptography, there has been lively activity since 2015.
Why everything suddenly became so urgent, no one explains to us.
They probably know more than they say. And they are covering their tracks ...
The competent organizations involved in setting universal technical standards are very noticeably concerned with the problems of so-called quantum-safe cryptography. Here are the facts worth paying attention to, even for those of us who are not specialists in cryptography.
The ETSI / IQC Workshop on Quantum-Safe Cryptography (https://www.etsi.org/events/1072-ws-on-quantumsafe) was held on September 19-21, 2016 in Toronto, Canada. To underline the significance of this event, it should be explained that ETSI is the European Telecommunications Standards Institute (that is, the industry counterpart of the American NIST, the main standardization body in the United States), and IQC is the Institute for Quantum Computing at the University of Waterloo, one of the world's leading research centers that has been working on cryptography in the context of quantum computers for well over a decade.
With such solid organizers, the participants of the symposium included not only leading scientists from academia and industry, but also senior people from the leadership of transnational corporations and government departments of Europe, North America, Japan, China and South Korea.
In addition, there were senior chiefs of the intelligence services responsible for information protection in states such as Britain, Canada and Germany.
And all these very busy people gathered in Toronto, back in 2016, to discuss how to strengthen cryptography against technologies that, even by the most optimistic estimates, will become a real threat in twenty years at the earliest.
If we also take into account that, almost simultaneously, in August 2016, NIST (USA) officially announced the launch of its own large-scale program for the transition from traditional to "post-quantum" cryptography, the conclusion becomes quite obvious.
Big changes have clearly already begun in the world of cryptography. And they began somehow very hastily, even with some signs of panic. Which, of course, raises questions. Here is why.
In the United States, the first official signal that something urgently had to be done about modernizing traditional cryptography came in August 2015. It was then that the National Security Agency, as the state's main authority on ciphers, issued a statement on significant changes to its basic policy, in connection with the need to develop new standards for post-quantum cryptography, or PQC for short (National Security Agency, "Cryptography Today", August 2015).
The parties involved in this process were told, and the NSA itself stated, that it considered that moment (still 2015-2016) the most suitable time to get to grips with developing new public-key cryptography protocols: cryptography whose strength would not be undermined by computations on quantum computers.
Naturally, the thought arises that someone, somewhere, in secret from everyone else, had already built a real quantum computer back then. And since the most visible and decisive initiative for an early transition to new, quantum-safe cryptography came from the NSA, it is easy to guess which state comes to mind first: one that has not only the largest budget for such initiatives, but also all the necessary scientific and technical capabilities. The NSA is a highly classified organization with covert access to the most powerful supercomputers on the planet.
In the open community of cryptographers, puzzled by the haste of the new initiatives, there are naturally plenty of other speculations to explain what is happening. Perhaps the most informative survey, summarizing and comparing all such hypotheses and assumptions without giving a final answer, is the well-known article prepared at the end of 2015 by the very famous cryptographers Neal Koblitz and Alfred Menezes (Neal Koblitz and Alfred J. Menezes, "A Riddle Wrapped in an Enigma").
To make it clearer why it makes sense to focus on the facts from precisely this analytical work, two points should be briefly clarified.
First: what place its authors occupy in open academic cryptography.
Second: how closely their own scientific work is intertwined with the NSA's initiatives to move the cryptographic algorithms in use onto new tracks.
The American mathematician and cryptographer Neal Koblitz is (along with Victor Miller) one of the two people who in 1985 simultaneously and independently came up with a new public-key crypto scheme called ECC (which, recall, is an abbreviation of Elliptic Curve Cryptography).
Without going deep into the technical details of this method and its differences from the earlier RSA scheme, we note that ECC has obvious advantages from the point of view of practical use, since the same theoretical strength of the algorithm is achieved with a much shorter key length (for comparison: 256-bit ECC operations are roughly equivalent to working with a 3072-bit modulus in RSA). This greatly simplifies the calculations and significantly improves system performance.
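As a rough illustration of that size comparison, here is a sketch using the third-party Python "cryptography" package to generate a 256-bit elliptic curve key (the NIST P-256 curve) and a 3072-bit RSA key side by side; the choice of library is mine, not something from the article.

    # Sketch: the key sizes compared above, generated side by side.
    # Requires the third-party 'cryptography' package.
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    ecc_key = ec.generate_private_key(ec.SECP256R1())      # NIST P-256, a 256-bit curve
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)

    print(ecc_key.curve.key_size)   # 256
    print(rsa_key.key_size)         # 3072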
The second important point (almost certainly related to the first) is that the extremely secretive NSA, in its cryptographic preferences, leaned in favor of ECC from the very beginning. (!)
In the early years and decades this reached academic and industrial circles only in implicit form (as when, for example, in 1997 an NSA official, Jerry Solinas, first spoke at the public Crypto conference with a report on their modification of the famous Koblitz scheme).
Later it was put on paper. In 2005 the NSA published its recommendations on cryptographic algorithms in the form of the so-called Suite B: a set of openly published ciphers for protecting secret and top-secret information in national communication systems.
All the basic components of this document were built on ECC, while RSA was assigned the auxiliary role of a "first generation" (!), needed only for a smooth transition to the new, more efficient elliptic curve cryptography ... (!)
Now we need to recall Alfred Menezes, the second co-author of "A Riddle Wrapped in an Enigma". The Canadian mathematician and cryptographer Menezes has spent his entire scientific life, since the mid-1980s, at the University of Waterloo, one of the most famous centers of open academic cryptography. It was there, in the 1980s, that three university professors created Certicom, a company dedicated to the development and commercial promotion of elliptic curve cryptography.
Accordingly, Alfred Menezes eventually became not only a prominent Certicom developer and the author of several authoritative books on ECC crypto schemes, but also a co-author of several important patents covering ECC. The NSA, in turn, when it launched its Suite B project, had previously purchased from Certicom a large package (more than twenty) of patents covering "elliptic" cryptography.
This whole preamble was needed to explain why Koblitz and Menezes are precisely the people who, for natural reasons, considered themselves well informed about the current affairs and plans of the NSA in the field of cryptographic information protection.
However, even for them, the NSA's initiative to change course sharply toward post-quantum algorithms came as a complete surprise. (!)
Back in the summer of 2015 (!) the NSA "quietly", without explaining anything to anyone, removed the P-256 ECC algorithm from its suite, while keeping its RSA equivalent with a 3072-bit modulus. Moreover, the NSA's accompanying statements said quite clearly that parties implementing algorithms from Suite B no longer had any reason to move to ECC; it was better simply to increase RSA key lengths and wait for new post-quantum ciphers to appear ...
But why? What is the reason for such a sharp rollback to the old RSA system? I do not think that such a serious organization makes such serious decisions for no reason.
Koblitz and Menezes have every reason to consider themselves competent in elliptic curve cryptography, yet they had heard absolutely nothing about new attack methods that would compromise "their" crypto scheme. So everything happening around ECC astonished the mathematicians greatly.
People with close contacts in this industry know that the large corporations that supply cryptographic tools and equipment to the US government always receive some kind of advance warning about changes of plan. In this case there was nothing of the kind.
Even more unexpected was the fact that no one from the NSA approached the people at NIST (USA), who are responsible for the state's open cryptographic standards.
And finally, even the NSA's own cryptographic mathematicians in the Information Assurance Directorate (IAD) were taken completely by surprise when the leadership presented them with its post-quantum initiative ...
One can conclude that the very influential people who, somewhere in the bowels of the NSA, initiated this public change of course did so without any feedback or consultation, even with their own experts. That is exactly the conclusion Koblitz and Menezes reach in their analysis. And they readily admit that, in the end, no one really understands the technical background of what is happening here.
The conclusion suggests itself that there was some unknown activity by some hidden actors.
For an adequate perception of the intrigue, it is very useful to know that the principles of public-key cryptography were in fact discovered almost simultaneously (in the 1970s) in two fundamentally different places. It was first done, a few years earlier, by three secret cryptographers within the walls of the British intelligence service GCHQ, the counterpart and closest partner of the American NSA. But, as has long been the custom there, everything was done in deep secrecy and "only for themselves".
The discovery was made not by GCHQ's regular staff but by the mathematicians of the CESG unit, responsible for national ciphers and the protection of government communication systems in the UK. The close interaction between GCHQ and the US NSA runs primarily along the lines of joint intelligence activities. In other words, since the NSA also has its own department, the IAD (Information Assurance Directorate), specializing in the development of cryptographic algorithms and information protection, the British colleagues' discovery was a complete surprise for the mathematicians of this unit, and they first learned about it from their fellow spies who work closely with the British ...
And when essentially the same algorithms, based on factorization and discrete logarithms, were soon invented in the USA, independently of the special services, by researchers in the open community (Diffie, Hellman, Merkle, Rivest, Shamir, Adleman), the NSA made a huge effort to stuff this genie back into the bottle.
Without revealing that the special service already had this mathematics, the NSA chiefs simply tried in every possible way to prevent scientists from publishing it widely. The national security advocates argued that strong cryptography is too serious a weapon, and that the new public-key encryption algorithms allow anyone, even people and parties who have never met, to hide from surveillance.
As everyone knows, the NSA achieved absolutely nothing by banning knowledge and gagging scientists. As a result, the open scientific community was very angry with the NSA, and, under pressure from scientists and industry, it was not the spy agency but the civilian structure, NIST (USA), that came to lead the development and implementation of commercial cryptography in the country.
And although this story is very old, it is quite clearly repeating itself, if, of course, you watch carefully.
The ETSI / IQC international symposium on quantum-safe cryptography (in 2016), with which this story began, had several notable features.
Firstly, it was very solidly attended by the heads of important structures of the intelligence services of Great Britain, Canada and Germany; all of these national services are counterparts of the American NSA. However, absolutely no one from the NSA was mentioned explicitly. And this, of course, is no accident.
There is plenty of evidence, both from business leaders and directly from the heads of intelligence agencies, that after Edward Snowden's revelations almost the entire US IT industry (not to mention other countries) reacts extremely negatively to NSA activity. In other words, at international forums discussing ways to strengthen cryptography in the light of new threats, it is now prudent for the NSA simply to keep out of sight.
Another notable feature is that this "workshop" in Toronto was not the first, but the fourth in a row. The first was held in 2013 in Paris, and the second, especially interesting for us, took place in the autumn of 2014 in the capital of Canada, Ottawa.
That event is interesting because it featured a highly unusual report given on behalf of the secretive British intelligence service GCHQ (P. Campbell, M. Groves, D. Shepherd, "Soliloquy: A Cautionary Tale"). It was a report from the CESG information security division, delivered personally by Michael Groves, who heads cryptographic research at that agency.
It must be emphasized that it is completely uncharacteristic for people from the British special services to talk about their secret developments at open conferences. This case, however, was truly exceptional.
In his report, Groves not only said that British cryptographers had been developing quantum-safe algorithms for a long time, since the early 2000s; he also revealed that their own scheme of this kind, Soliloquy, had been abandoned outright.
Importantly, the decision to abandon it completely (rather than to strengthen and modernize the old design) was taken by the special service mainly because of a very powerful and very impressive attack, developed back in 2013 (!) by a group of researchers from the open academic community. Their paper (K. Eisentraeger, S. Hallgren, A. Kitaev, and F. Song, "A quantum algorithm for computing the unit group of an arbitrary degree number field", STOC, ACM, 2014) describes an essentially new quantum attack of a very general type, covering, in particular, a wide range of "post-quantum" crypto schemes, including Soliloquy, which at that time was unknown to anyone ...
The effect of this "half-open" talk by a senior cryptographer of the British secret service turned out to be exactly as was evidently intended. The information security industry and academia readily accepted the CESG people as very knowledgeable consultants (who had clearly demonstrated not only their "leading" competence, but also their willingness to share even the experience of their failures). At the forum in Toronto, the two CESG bosses were even entrusted with chairing sessions and moderating discussions. (!)
A quite different effect, the one that usually accompanies any cooperation with special services, also showed itself immediately: excessive secrecy, and attempts to muffle even research results that had already been published.
The story of the CESG chief cryptographer's appearance at the open symposium was covered extremely sparingly in the media, and the article and presentation slides about Soliloquy can be found on the web only by those who know very precisely what they are looking for (on the ETSI website, where these files are hosted, no direct links to them can be found).
But the most unpleasant part is something else.
If anyone wants to read the very article by open-community scientists that so impressed the British intelligence service, it quickly becomes clear that it is not easy to find. The article is absent from Arxiv.org, the preprint site where, alongside physicists and mathematicians, computer scientists and cryptographers have long published. It is also absent from Eprint.iacr.org, the specialized site for cryptographic preprints run by the IACR, the International Association for Cryptologic Research. Meanwhile, each of the authors of the article in question has many other publications on one or even both of these sites.
But the work we need is not there. Strange, but true.
Worse, if you set off to search for the file on the researchers' personal web pages on university sites, an ambush awaits there too. The most famous of the co-authors, Alexei Kitaev, a superstar of quantum computing, has only a tangential relationship to cryptography and does not collect links to his publication files anywhere.
Another co-author, Sean Hallgren, who really is known as a cryptographer, used, like many other researchers, to post links to his publications on his university web page. But precisely with the article we are interested in, this practice suddenly stopped. For all previous articles the files are available; for the one we need, only the title; for the subsequent publications of 2015-2016, not even a title, although such works can be found in the preprint archives ...
A truly complete list of everything that has been done, is being done, and will even be done (with the appropriate links to files) is found only on the site of the youngest of the co-authors, Fang Song. Significantly, though, it is not on his university web pages but on his personal site FangSong.info. And even there strange losses appear. A PDF file with one version of the article we are looking for is still there, but the links to roughly the same file labelled "full version" and "Arxiv.org" turn out to be broken, looping back to the main page. That is, the files had clearly been put up by the author, but even there, as on the ArXiv site, they have inexplicably disappeared ...
All "disappearances" of this kind (and there are quite a few similar cases) are puzzling only to a very naive and superficial view of things. Most often the explanation is already contained in the headers of the articles, where the authors (in accordance with rules long established among scientists) are obliged to indicate the sources of funding and the grants with which the research was carried out.
Specifically, in our case, the sponsor of this uniquely outstanding article on a new method of quantum cryptanalytic attack is (surprise!) the US National Security Agency. Well, "he who pays the piper calls the tune", as they say. It is clear that the authors of a study are always interested in the wide dissemination of their results, but their sponsors often have exactly the opposite goals ...
The only obscure and really important point that has not yet been covered in this whole story is this.
What can the connection be between the new, very effective (and very impressive to the special services) algorithm for breaking all kinds of cryptosystems with a hypothetical quantum computer, and the NSA's hasty steps (back in 2015-2016) to withdraw elliptic curve cryptography from circulation? The connection, as it turns out, is completely direct. But to notice it, again, one must watch carefully what is happening.
When, at the turn of 2014-2015, the open community first learned about the post-quantum Soliloquy algorithm from the British intelligence service, its subsequent compromise and the parallel invention of the quantum attack, one very competent and knowledgeable cryptographer, Dan Bernstein, made an interesting generalization:
https://groups.google.com/forum/#!topic/cryptanalytic-algorithms/GdVfp5Kbdb8
Comparing all the facts known at that time, Bernstein put forward the hypothesis that the new quantum algorithm by Hallgren, Fang Song and colleagues in fact also points the way to significantly more powerful attacks using traditional classical computers.
Moreover, on the basis of the well-known but very vague comments by the British, Bernstein concluded that the British special services know this, but prefer to keep it secret from everyone ...
And we know what happened afterwards. A few months later, in August 2015, the NSA suddenly surprised the whole cryptographic world with its abrupt rejection of ECC cryptography with relatively short key lengths.
The only ones who were hardly surprised were probably the cryptographers of the British intelligence service.
Well, six months later, at the beginning of 2016, at least two independent publications from researchers in the open cryptographic community appeared, which in the most general terms confirmed Dan Bernstein's conjecture:
1) Ronald Cramer, Léo Ducas, Chris Peikert, Oded Regev. "Recovering Short Generators of Principal Ideals in Cyclotomic Rings." In Eurocrypt 2016;
2) Jean-François Biasse and Fang Song, "Efficient quantum algorithms for computing class groups and solving the principal ideal problem in arbitrary degree number fields". In 27th ACM-SIAM Symposium on Discrete Algorithms.
In other words, it has now been rigorously shown, for everyone to see, that the new purely "quantum" approaches to solving hard cryptographic problems can indeed significantly reduce the cost of breaking crypto schemes with classical computers as well.
About compromising the ECC scheme specifically, nothing has been openly announced yet.
Or maybe there is no need to announce it?
Let us think together: would that be beneficial to the one who is in the know?
But this, it seems, is only a matter of time.
I am amazed no one has mentioned Microsoft here, because it is one of the early adopters among the huge companies. Passwordless authentication is good up to a point, because it makes it harder to fall victim to hackers, phishing and so on, thanks to multi-factor authentication. If you are interested in it, you should read what is written on this Microsoft page and also watch the videos, link here:
https://www.microsoft.com/en-us/security/technology/identity-access-management/passwordless
I agree with OP, we really need something like that, and I am amazed that some companies have not even thought about it, especially Ledger and others which aim at the security of crypto wallets.
------------------------
I read the Microsoft passwordless authentication materials, but in fact what is there is multi-password authentication, with no real innovations.
What can you say about Microsoft: it is always true to its traditions, making strange software. Their main product is the Windows OS, always full of holes; monthly, weekly, right up until its replacement, they keep patching it, and there are always hundreds of holes in the security system. If I managed such a company, I would hide my face.
It has long been noticed that the higher the salary, the less time is left for reflection.
They faithfully combined all the old authentication technologies they knew into one software product, only adding their own protocol and a model document for sale and advertising. A perfect, endless business scheme.
By the way, the thought occurred to me: is not their main goal simply money?
These guys can sell something that no one else can sell.
Seriously, biometrics are the easiest identifier to fake. There is plenty of news about this from serious organizations, with demonstrations of experiments; I do not want to advertise it all. Anyone who wants to can find (in the public domain, too) programs that will reproduce your face, your "fingers" and your "eyes". It is all quite primitive. Of everything they crammed into their "passwordless" authentication, the most reliable element is the password and its semantic analogue, the key.
Being mistaken about this, they write the opposite on the first page of their advertising document:
Passwords are no longer enough. IT departments around the world see the beginning of a new era, where passwords are considered a relic of the past. The costs now outweigh the benefits of using passwords, which increasingly become predictable and leave users vulnerable to theft. Even the strongest passwords are easily phishable. The motives to eliminate authentication systems using passwords are endlessly compelling and all too familiar to every enterprise IT organization. But how do you get there?
For enterprise IT departments, nothing costs more than password support and maintenance. It's common practice for IT to attempt lessening password risk by employing stronger password complexity and demanding more frequent password changes. However, these tactics drive up IT help desk costs while leading to poor user experiences related to password-reset requirements. Most importantly, this approach isn't enough for current cybersecurity threats and doesn't deliver on organizational information security needs.
It is difficult to understand ingenious people, and especially what they are doing.
I am sure nobody has invented anything better than OAuth2 over HTTPS. It is absolutely simple and it really works
---------------------------
As I answered you earlier, OAuth 2.0 authorization is a protocol built on top of dangerous legacy technologies.
Now I can expand that answer, to make it clear that new names, regrettably, do not guarantee new qualities for the user.
The essence, however, is well obscured.
Here is material from public sources; I am not the author of these thoughts:
OpenID Connect is the third generation of OpenID technology: an authentication layer on top of the OAuth 2.0 authorization protocol. OpenID Connect allows Internet resources to verify the identity of the user based on authentication performed by an authorization server.
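As a rough sketch of what that verification looks like on the relying party's side, here is how an ID token (a signed JWT) can be checked with the third-party PyJWT library; the key, issuer and client ID values are hypothetical placeholders.

    # Sketch: a relying party verifying an OpenID Connect ID token (a signed JWT).
    # Requires the third-party 'PyJWT' package; all values here are hypothetical.
    import jwt

    def verify_id_token(id_token, provider_public_key, expected_issuer, client_id):
        claims = jwt.decode(
            id_token,
            provider_public_key,
            algorithms=["RS256"],      # accept only the expected signing algorithm
            audience=client_id,        # token must be addressed to this client
            issuer=expected_issuer,    # token must come from the expected provider
        )
        return claims                  # contains 'sub', 'exp', 'nonce', ...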
1.
Phishing attacks. Some researchers believe that the OpenID protocol is vulnerable to phishing attacks in which, instead of the provider, attackers send the end user to a site with a similar design ... As a result, attackers can present themselves to Internet resources as the given user and gain access to his information stored on those resources.
Phishing attacks are also possible when a site that supports OpenID authentication is forged in order to obtain the user's information from the provider.
Important:
OpenID contains no mechanisms to prevent phishing attacks. Responsibility for phishing attacks is shifted onto the OpenID providers.
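Since the protocol itself offers nothing here, about the only client-side measure is organizational: for example, refusing to redirect the user to any provider that is not on an explicit allow-list. A hedged sketch, with an invented allow-list:

    # Sketch: a relying party refusing to redirect to providers outside a fixed
    # allow-list. This mitigates, but does not solve, the phishing problem.
    from urllib.parse import urlparse

    TRUSTED_PROVIDERS = {"accounts.example.com", "login.example.org"}   # hypothetical

    def safe_to_redirect(provider_url):
        parsed = urlparse(provider_url)
        return parsed.scheme == "https" and parsed.hostname in TRUSTED_PROVIDERS

    print(safe_to_redirect("https://accounts.example.com/auth"))    # True
    print(safe_to_redirect("https://accounts.examp1e.com/auth"))    # False: look-alike domain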
2.
Man-in-the-middle attack over an unprotected connection.
... To redirect the user from itself to the Internet service, the provider gives the user a special URL. The problem is that anyone who can obtain this URL (for example, by sniffing the wire) can replay it and gain access to the site as that user.
3.
Some providers use a nonce to protect against this attack: a one-time value that allows the URL to be used only once. The nonce solution works only if the user is the first to use the URL. An attacker who is listening on the communication channel and sits between the user and the provider can, however, obtain the URL, immediately terminate the user's TCP connection, and then carry out the attack himself. Thus, one-time codes protect only against passive eavesdroppers and cannot prevent attacks by an active attacker.
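A toy sketch of the one-time nonce logic described above makes the limitation visible: replaying the URL later fails, but an active attacker who redeems it before the victim still wins.

    # Sketch: one-time nonce issuance and redemption. A replayed URL is rejected,
    # but whoever redeems the nonce FIRST wins, which is exactly the limitation
    # described above for active attackers.
    import secrets

    issued_nonces = set()          # in practice: server-side storage with expiry

    def issue_nonce():
        nonce = secrets.token_urlsafe(32)
        issued_nonces.add(nonce)
        return nonce

    def redeem_nonce(nonce):
        if nonce in issued_nonces:
            issued_nonces.discard(nonce)   # consume it; a second use will fail
            return True
        return False

    n = issue_nonce()
    print(redeem_nonce(n))   # True: first redemption, whoever got there first
    print(redeem_nonce(n))   # False: the replayed URL is rejected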
4.
Reuse of identifier.
The user can change his OpenID provider, thereby releasing his identifier from the previous provider. A new user can take this identifier and use it on the same sites as the previous user. This gives the new user access to all the information associated with the identifier. The situation can also arise by accident: the new user need not be an attacker who wants to gain access to that information.
5.
Authentication Errors.
In 2012, researchers published a paper describing two vulnerabilities in OpenID. Both vulnerabilities allow an attacker to gain access to the victim’s account.
The first vulnerability exploits OpenID Attribute Exchange. The problem is that some Internet services do not verify the data transmitted through Attribute Exchange. According to the researchers' report, many popular sites were affected, including Yahoo! Mail.
The second vulnerability is related to an error on the provider's side and likewise allows access to an account on the relying party's site.
So, however you rearrange the old, you will not get something good and new out of it.
I am sure nobody has invented anything better than OAuth2 over HTTPS. It is absolutely simple and it really works
-------------------
And here are facts confirming what was said above about the quality of Microsoft's OAuth 2.0!
Do you think they would all tell us themselves that there is a hole in it?
Read:
Security researchers from CyberArk, an Israeli company, have discovered a vulnerability in the Microsoft Azure cloud service. The problem affects certain applications that use the Microsoft OAuth 2.0 authorization protocol, and its exploitation allows login tokens to be created. In this way, attackers can take control of victims' accounts and act on their behalf.
The experts discovered several Azure applications released by Microsoft that are vulnerable to this type of attack. If an attacker gains control over domains and URLs that Microsoft trusts, these applications allow him to trick the victim into automatically generating access tokens with the user's permissions. It is enough for the criminal to use simple social engineering to get the victim to click on a link or visit a malicious website. In some cases the attack can be carried out without any user interaction: a malicious website with a hidden embedded page can automatically trigger a request that steals a token from the user's account.
Such applications have an advantage over others, since they are automatically approved in any Microsoft account and therefore do not require user consent to create tokens.
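The general mitigation for this class of attack is strict, exact-match validation of redirect targets rather than trust in whole domains. A minimal sketch, with a hypothetical registered URI:

    # Sketch: tokens should only be delivered to redirect URIs that exactly match
    # what was registered for the application, never to "trusted" wildcard domains.
    REGISTERED_REDIRECT_URIS = {
        "https://app.example.com/auth/callback",    # hypothetical registration
    }

    def redirect_uri_allowed(requested_uri):
        # Exact string comparison: no prefix, wildcard or subdomain matching.
        return requested_uri in REGISTERED_REDIRECT_URIS

    print(redirect_uri_allowed("https://app.example.com/auth/callback"))          # True
    print(redirect_uri_allowed("https://app.example.com.attacker.net/callback"))  # False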
Be careful with products that advertise themselves as "software authorities".