In recent months we have seen that Artificial Intelligence (AI) systems are tremendously efficient. But during this same time we have also come to the conclusion that it would be difficult to trust them blindly. That, at least, is what is happening at Apple, which has restricted the use of ChatGPT internally.
This generative model, developed by OpenAI and heavily backed by Microsoft, has proven to be a platform that could revolutionize the workspaces of many companies globally.
But at the same time there has been an intense debate about the real reliability of this type of system, and that seems to be a factor of concern for Apple.
It is enough to use ChatGPT, Bing Chat or similar assistants more than once to realize that these AIs are very convincing in their responses, but are often inaccurate, misleading or simply make things up.
To this we must add a factor that now seems to be of great concern for large corporations in the IT sector: the possibility that Artificial Intelligence (AI) could become a channel for leaks of any company's most secret projects.
It seems the folks in Cupertino did not like this last detail in the slightest.
Apple prohibits the use of ChatGPT among its employees
An article in The Wall Street Journal, citing internal sources and company documents to which the newspaper had access, reveals that Apple has strongly restricted the use of ChatGPT and AI coding tools such as GitHub Copilot among its employees.
The reason behind this restriction appears to be the fear of leaking sensitive data externally. The specific concern is not suspected corporate espionage, but rather uncertainty about the real security of that AI's architecture.
Strictly speaking, ChatGPT is a conversational large language model (LLM) developed by OpenAI and financially backed by Microsoft. Currently, the model runs only on OpenAI or Microsoft servers and is accessible via the Internet.
That dependence on external infrastructure, and the final location of the servers, is what worries Apple. Ultimately, the concern is that sensitive data could be exposed on those servers due to the nature of the system and how it works.
Beyond the location of the servers, there is also the fact that conversations with the model may be reviewed by other developers to improve its processing in the cloud and to inform future improvements to the AI model.
And there are precedents: past ChatGPT bugs have already been documented exposing information from users' chat histories and other details.
Meanwhile, Apple is reportedly also working on developing its own artificial intelligence technology similar to ChatGPT.