Virgin Technology Summit
The Virgin Technology Summit is a two-day event that starts today. Follow this link to join virtually:
https://hopin.to/events/global-aerospace-summit
Alka Jarvis needed to make sure this interview happened.
We had scheduled several meet-ups before this chat, but for one reason or another (an absence of Wi-Fi, unexpected meetings) something had always come up.
“I got up today and I thought, I must make sure I attend this meeting,” Jarvis tells me during our interview. “At any expense.”
It’s this kind of tenacity that finally brought us to our chat, the same tenacity that has fueled Jarvis’ career throughout her time as an engineer.
“Why don’t you do something about it?”
Jarvis, who was born and raised in Nairobi, Kenya, is of Indian ancestry. She says that culturally, she grew up in Kenya under a lot of British influence. The engineer, who is now Cisco’s only Distinguished Quality Engineer, first entered the world of quality through a fateful job on the sales team of a small company.
“I accompanied the sales teams as they did demos,” says Jarvis. “The salesperson would try to show customers certain files, but the files weren’t there. With my experience in computer science and software development, I continued to raise concerns. It was a small company, and I was the only one saying that our software didn’t work. One time I was finally challenged by the senior VP of operations: ‘Why don’t you do something about it?’”
It was that one challenge that set Jarvis on her path toward quality assurance.
“I went to Golden Gate University Library and learned everything I could about software testing and quality assurance,” says Jarvis. At that time she also co-founded the Bay Area Quality Assurance Association, contacting companies through the Yellow Pages to discuss the field of software quality and processes.
Software quality assurance, Jarvis tells me, means making sure that products and services work according to customer requirements and exceed customer expectations.
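To make the idea concrete, here is a minimal sketch, in Python, of the kind of check software testing automates: verifying behavior against a stated customer requirement. The discount function and its requirement are hypothetical examples, not anything drawn from Jarvis’ courses.

```python
# A minimal sketch of an automated quality check. The function and
# its requirement are hypothetical examples.

def apply_discount(price, percent):
    """Requirement: a discount reduces the price and never drives it below zero."""
    return max(0.0, price * (1 - percent / 100))

def test_discount_meets_requirement():
    # Normal case: 20% off 100.0 should yield 80.0.
    assert abs(apply_discount(100.0, 20) - 80.0) < 1e-9
    # Edge case: an oversized discount must never produce a negative price.
    assert apply_discount(100.0, 150) == 0.0
    print("all requirement checks passed")

test_discount_meets_requirement()
```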
Learning to teach
In her in-depth studies of quality assurance, Jarvis started looking at the UC Berkeley Extension curriculum in software engineering. She found that all of the classes taught programming, but none were actually about quality and testing. After she approached one of the directors at UC Berkeley, the school asked her to develop and teach a class.
Twenty-one years later, Jarvis still teaches quality at UC Berkeley, UC Santa Cruz Extension, and Santa Clara University, and she has published seven books.
“This is where Cisco shows industry leadership,” says Jarvis. “When an employee teaches or participates in industry events, it shows Cisco as the leader. This type of industry participation makes students and people more aware of Cisco and of the talents we have within the company.”
Quality work
Persistence, Jarvis stresses, is what got her to this point in her work. She encourages others to actively pursue their interests in life and career.
“If you have a passion for any topic within Cisco, take it upon yourself and go after it,” says Jarvis. “You are the master of your career. It might just seem like a very small thing right now that you’re involved in. I was only on the sales team to watch demos, but I thought that I had to get involved, and I did not stop at that.”
And she doesn’t plan to stop any time soon. Her next venture, into telemetry within the Technology and Quality functional organization of Supply Chain, has just begun. Jarvis would like to emphasize leading indicators, such as “predictive” telemetry, to track issues and stay a step ahead for customers.
If the way Jarvis has pursued her career, and even this interview, is any indication, we can expect big things from her in this next step.
Source: Cisco
What it will take for us to trust AI
Like humans, computers need to behave as we would expect
By Guru Banavar, IBM Research
The early days of artificial intelligence (AI) have been met with some very public hand-wringing. Well-respected technologists and business leaders have voiced their concerns over the responsible development of AI. And Hollywood’s appetite for dystopian AI narratives appears to be bottomless. This is not unusual, nor is it unreasonable. Change, technological or otherwise, always excites the imagination. And it often makes us a little uncomfortable.
But in my opinion, we have never known a technology with more potential to benefit society than artificial intelligence. We now have AI systems that learn from vast amounts of complex, unstructured information and turn it into actionable insight. It is not unreasonable to expect that within this growing body of digital data — 2.5 exabytes every day — lie the secrets to defeating cancer, reversing climate change, or managing the complexity of the global economy.
Within just a few years, we also expect AI systems to pervasively support the decisions we make in our professional and personal lives. In fact, this is already happening in many industries and governments.
However, if we are ever to reap the full spectrum of societal and industrial benefits from artificial intelligence, we will first need to trust it.
Trust in AI systems will be earned over time, just as in any personal relationship. Put simply, we trust things that behave as we expect them to. But that does not mean that time alone will solve the problem of trust in AI. AI systems must be built from the get-go to operate in trust-based partnerships with people.
The most urgent work is to recognize and minimize bias. Bias could be introduced into an AI system through the training data or the algorithms. The curated data that is used to train the system could have inherent biases, e.g., towards a specific demographic, either because the data itself is skewed, or because the human curators displayed bias in their choices. The algorithms that process that information could also have biases in the code, introduced by a developer, intentionally or not. The developer community is just starting to grapple with this topic in earnest. But most experts believe that by thoroughly testing these systems, we can detect and mitigate bias before the system is deployed.
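As an illustration of what such pre-deployment testing might look like, here is a minimal Python sketch that compares a model’s positive-prediction rates across demographic groups. The groups, predictions, metric (a demographic parity gap), and tolerance are all hypothetical choices for illustration, not a standard the article prescribes.

```python
# A minimal sketch of a pre-deployment bias test, assuming binary
# predictions grouped by a single demographic attribute.

def positive_rate(predictions):
    """Fraction of predictions that are positive (1)."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_by_group):
    """Largest difference in positive-prediction rates across groups."""
    rates = {group: positive_rate(p) for group, p in preds_by_group.items()}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs, grouped by demographic.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [0, 1, 0, 0, 1, 0, 0, 1],
}

gap, rates = demographic_parity_gap(predictions)
print(f"positive rates: {rates}, gap: {gap:.2f}")

TOLERANCE = 0.10  # illustrative threshold; set by policy in practice
if gap > TOLERANCE:
    print("Bias check failed: review the training data and model before deployment.")
```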
Managing bias is an element of the larger issue of algorithmic accountability. That is to say, AI systems must be able to explain how and why they arrived at a particular conclusion so that a human can evaluate the system’s rationale. Many professions, such as medicine, finance, and law, already require evidence-based auditability as a normal practice for providing transparency of decision-making and managing liability. In many cases, AI systems may need to explain their rationale through a conversational interaction (rather than a report), so that a person can dig into as much detail as necessary.
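A minimal sketch of that idea, assuming a simple linear scoring model: alongside its score, the system reports each factor’s contribution, so a human can audit the rationale. The feature names and weights here are hypothetical placeholders.

```python
# A minimal sketch of evidence-based explanation for a linear scorer.
# WEIGHTS and the applicant's features are hypothetical placeholders.

WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.3}

def score_with_rationale(applicant):
    """Return a decision score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
total, contributions = score_with_rationale(applicant)

print(f"score: {total:.2f}")
# List the largest drivers first, so an auditor sees what mattered most.
for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {value:+.2f}")
```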
In addition, AI systems can and should have mechanisms to insert a variety of ethical values appropriate to the context, such as the task, the individual, the profession, or the culture. This is not as difficult as it sounds. Ethical systems are built around rules, just like computer algorithms. These rules can be inserted during development, deployment, or use. And because these are learning systems, researchers believe that AI systems can, over time, observe human behavior to fill in some of the gaps.
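One minimal sketch of how such rules might be inserted as a layer over a system’s proposed actions: the medical-consent rule and action names below are hypothetical placeholders, and a real system would load a rule set appropriate to its context.

```python
# A minimal sketch of a context-specific rule layer that vets a
# model's proposed action before it is carried out. The rule and
# action names are hypothetical.

def medical_rules(action, context):
    """Example rule set for a medical context."""
    if action == "share_record" and not context.get("patient_consent"):
        return False, "patient consent required"
    return True, "allowed"

def apply_rules(action, context, rules):
    """Run a proposed action through the active rule set; block it if disallowed."""
    allowed, reason = rules(action, context)
    return (action if allowed else None), reason

action, reason = apply_rules("share_record", {"patient_consent": False}, medical_rules)
print(action, "-", reason)  # None - patient consent required
```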
It is incumbent upon the developers of AI systems to address these issues in a way that satisfies both the industry and the general public. This is already well understood throughout the technology industry, which is why IBM is working with some of its fiercest competitors, including Google, Microsoft, Amazon and Facebook, on the “Partnership on AI,” a unique and open collaboration designed to guide the ethical development of artificial intelligence.
Business leaders considering artificial intelligence solutions should include trust and accountability as part of their criteria for adoption. They should be thoughtful about how and where this technology is introduced throughout the organization. And they should work with their technology vendors to identify any unwanted behaviors and correct them if necessary.
But delaying the implementation of artificial intelligence is not an option. We pay a significant price every day for not knowing what can be known: not knowing what’s wrong with a patient, not knowing where to find a critical natural resource, or not knowing the hidden risks in the global economy. We believe that many of these ambiguities and inefficiencies can be eliminated with artificial intelligence.
Artificial intelligence is an undeniably powerful technology. And as with any powerful technology, great care must be taken in its development and deployment. Just as it is our obligation to apply this technology to complex, societal problems, it is our obligation to develop it in a way that engenders trust and safeguards humanity. In other words, building trust is essential to the adoption of artificial intelligence.
And we believe that its adoption is essential to humanity.