


The Garibay Institute’s Creativity Masterclass & Workshop aims to instill creativity frameworks and orthogonal practices that help global leaders in the corporate, academic, finance, entertainment and policy worlds achieve higher levels of performance.


The masterclass and workshop is led by renowned super-producer, polymath and creativity pioneer Fernando Garibay and his team. The core curriculum is based on his “Creativity as a Skill” principles and framework. Fernando’s unique orthogonal approach distills creativity into its basic form while zooming in and out of the latest research across a vast array of disciplines: neuroscience, behavioral economics, neuroeconomics, evolutionary psychology, intergenerational transmission of inherited trauma and intelligence, epigenetics, and Eastern and Western philosophy, among others. Cross-referencing these findings with case studies from his culture-shifting work making hits with Lady Gaga, Shakira, Whitney Houston, Sia, Paris Hilton and Lizzo, Fernando showcases how creativity and hyper-sensing are skills that can and must be taught, honed and mastered: for the evolution of our inner and outer selves, for our interconnectedness, and for the betterment of humanity.


We recently had the pleasure of teaching and inspiring a long list of Young Global Leaders and friends at The Garibay Institute on January 28 and 29, 2023. This was a very special Global Leaders edition in which we used hit songwriting and music production as a proxy and gateway to the creative experience. “We set out to prove that everyone can be a creative being; what we accomplished was this shift in perception and the birth of new artists.” - Fernando Garibay


Thank you to Lily, Soulaima and Peggy Liu for all the support and help producing this event, and a special thank you to all our new YGL artists.

YGL Artist Participants (January 28, 2023)

Kristine Stewart


Zaib Shaikh


Chris Ategeka


Lily Lapenna


Alice Jacobs


Winston Damarillo


Claudia Massei


Mei Wen


Maria del Castillo


Geogie Giner


Pam Ros Damarillo


Jaime Nack


Fernando Grostein Andrade


Andrea Carafa


Soulaima Gourani


Fernando Siqueira


YGL Artist Participants (January 29, 2023)


Soulaima Gourani


John Kim


Robin Merritt


Shashank Sripada


Ramazan Nanayev


Chirag Sagar


Vimbayi Kajese


Sara Sutton


Laura Walker Lee


Yiwen Li


Jeremy Miller


Peggy Liu


Originally published on THE HILL by James Cooper and Kashyap Kompella, opinion contributors, 06/05/23 02:00 PM ET




Last month, Sam Altman, CEO of OpenAI, the company that gave the world ChatGPT and all the headaches thereafter, pleaded before the United States Senate Judiciary subcommittee for the U.S. Congress to regulate “increasingly powerful” artificial intelligence (AI) systems. A week later, the Biden-Harris administration announced new efforts “to advance the research, development and deployment of responsible AI that protects individuals’ rights and safety and delivers results for the American people.”


This new missive builds on the administration’s AI Bill of Rights from October 2022 and Senate Majority Leader Chuck Schumer’s (D-N.Y.) framework for AI legislation from April 2023. However, none of these efforts constitute actual legislation; they sound remarkably similar to a university faculty or corporate meeting in which one plans to make a plan. Other than state- and local-level legislation, such as New York City’s law to prevent AI-driven bias in employment decisions, most AI regulatory efforts thus far have been advisory in nature. Is this patchwork the best way to approach regulating something that is poised to alter our information economy, impact knowledge-worker employment, and exacerbate the ills of technology on society?


As U.S. lawmakers ride their revolving merry-go-round of confabulations, the European Parliament is on the verge of enacting the European Union Artificial Intelligence Act. While the EU AI Act contains innovative ideas that can inform global regulatory efforts, such as tiered rules for AI systems based on their respective risk levels, it risks creating a regulatory chokehold on the technology industry. Regulations have unintended consequences; take the EU’s data protection law, the General Data Protection Regulation (GDPR). Only deep-pocketed companies have the resources to follow its complex rules, tilting the scales in favor of Big Tech while small businesses struggle to comply. The GDPR’s results are mixed at best, and that should give us pause.


The Chinese approach to the technology sector has been to champion local companies and create a blooming walled garden. In AI, the approach of the People’s Republic of China is similar; the country’s political leaders and its battery of state-owned enterprises and academic institutions are focusing on AI with Chinese characteristics, hoping to win the global AI race. In May 2019, the Beijing AI Principles were published by the Beijing Academy of Artificial Intelligence, an organization backed by the Chinese Ministry of Science and Technology and the Beijing municipal government, along with Tsinghua University, Peking University and the country’s biggest tech behemoths: Alibaba, Baidu and Tencent. China has also been quick off the blocks to regulate generative AI, seeking to impose measures such as prior government approval before the release of any ChatGPT-like products. As China accelerates its international trade and investment policy, the Belt and Road Initiative, its rules on AI may also get exported to countries around the world.


Countries as diverse as Brazil, Canada, Germany, Israel, India and the United Kingdom are developing national strategies for the ethical use of AI as well. Among the Gulf countries, the United Arab Emirates, Saudi Arabia and Qatar have also outlined national AI strategies and roadmaps. Our analysis of these efforts makes one thing very clear: No country has it all figured out yet. AI requires updates to our regulatory approach and upgrades to our risk architectures.


AI is both a horizontal technology with broad applications and a dual-use technology that can be put to both good and bad uses. Outside the EU and China, the current approach has mostly been to establish guidelines, but these are neither binding nor backed by penalties for transgressions. Without enforcement, there is little point in having rules that do nothing but codify norms.


But rules are not the only way to enforce important norms for society. There are other ways to regulate AI research, development and deployment short of innovation-stifling regulation.


Self-regulating organizations (SROs), wherein industry participants voluntarily establish standards and best practices and agree to abide by them, are one mechanism by which AI researchers and merchants can be held accountable, albeit to each other. If such SROs have the power to sanction members, as the Financial Industry Regulatory Authority does for the financial system in the United States, all the better. Organizations declaring their support for and compliance with ethical AI principles and standards would be a great start. Independent audits and third-party certifications of compliance with those standards could then define the next level of scrutiny.


Myriad product safety, consumer protection and anti-discrimination laws already exist and apply to products and services that embed AI. When such systems make mistakes, the consequences depend on the context and use case. Autocorrect misfiring carries low stakes; facing criminal charges because of an AI error, on the other hand, carries massive impact and must be avoided. The bar for AI must be high when the cost of errors and the consequences of mistakes are high. That is exactly the level-of-risk approach to regulation currently being considered in the EU. And sector-specific regulation, as contemplated in the U.K., can bring contextual granularity.


Ultimately, regulation has to balance multiple objectives: citizen rights, consumer welfare, technological innovation, economic interests, national security and geopolitical interests, among others. It needs to consider both the present and the future. Its scope is local and global. As such, AI rules must align with existing regulatory ethos, institutions and capabilities. AI regulation has to be strategic, not trend-chasing or based on the latest shiny AI tool.


Post-ChatGPT, the chorus for AI regulation is reaching a crescendo. The one race that no one should win, however, is the race to the regulatory bottom.


James Cooper is a professor of law at California Western School of Law in San Diego.


Kashyap Kompella, CFA, is CEO of RPA2AI Research and a visiting professor for artificial intelligence at BITS School of Management (BITSoM).

Originally published on The Messenger by James Cooper, Fernando Garibay and Kashyap Kompella



For decades, the United States and other northern industrialized countries have moved their economies from manufacturing to services. They offshored the smokestacks of yesteryear to the developing world, and with them black lung disease, industrial accidents and labor strife. Clean rooms and hackathons became the workshops as the developed world moved to a knowledge-based, post-industrial economy. This globalization bargain worked through the 1990s and 2000s. But such a post-Fordist model benefits the developed world only if its innovators do the designing, inventing, licensing and royalty collection that come with leadership in disruptive technologies.


Indeed, the post-Cold War global trading regime, typified by the World Trade Organization and the regional and bilateral agreements the U.S. signed with trading partners, was based on the understanding that the developing world would host low-skilled factories while the developed world would produce knowledge and commercialize it. The knowledge-based economy created a world in which intellectual property became king and the industrialized north created new “things”: the internet, GPS, 5G, the mobile telephone, semiconductor-chip designs, social media, pharmaceutical breakthroughs and other game-changing technologies. Knowledge was incentivized, celebrated, leveraged and rewarded. Sometimes it went into the public domain, becoming part of the collective commons; other times it went public (as in, an initial public offering).


But with the advent of artificial intelligence (AI), our knowledge increasingly risks being commodified and supplanted by machine learning. Whether in law, medicine, accounting, finance, scriptwriting, music or a host of other services, AI is making rapid and unsettling changes. If the current pace of progress in AI continues, there will be a question mark over the role of human workers in new economic paradigms centered on algorithms and automation. Why should companies pay for health care, pensions and other social safety net niceties when algorithms can make decisions and machines can work around the clock without complaining, unionizing, taking vacations or wasting time gossiping at the proverbial water cooler? Such disruption creates the need for a “cultural worker”: a digital artisan who redefines and elevates our human value.


In the future, we may witness the current archetype of the knowledge worker diminish in importance, ceding space to this new kind of creative and economic agent. We need to return to something akin to the Renaissance, a period during which the arts changed society. Humanity proved its worth by expressing knowledge and then intelligence, culminating in wisdom and enlightenment. Works of art and other artifacts of beauty were created in the process. The era of synthetic intelligence now knocking on our doors requires us to reimagine and retool the human part of the production cycle. We need to turn the ingenuity, creativity and tacit knowledge acquired through the march of human civilization into actionable and valuable outputs.


Humans must embody the value proposition that cannot be replicated by machines. We have experienced the shaky end of the post-Industrial Revolution, with its polarized reaction to globalization, as seen in recent regressions to isolationism, xenophobia, populism and scapegoating. We have also witnessed society’s yearning for a return to more traditional, holistic wellness, accompanied by health and spirituality. We must synthesize the best of past eras and look forward to a time of new harmony, when machines will do more of our grunt work but not our soul-searching or emotional fulfillment. Many people suggest regulating AI, but we also need to rethink the human-machine partnership.


AI is not replacing us; it complements what humans do. To collaborate with synthetic intelligence, we must develop an early line of defense. If government, industry and academia plan and prepare, we could avoid short-term employment shocks and displacement.


Creativity provides us with the cognitive faculties to distill a more meaningful and purposeful life. As long as our souls and consciousness cannot be quantified or qualified, humans will always play a role. This new renaissance sees machines as helpers, not hinderers. We can coexist. This is not a zero-sum game but a game of zeros and ones. We are the ones.



James Cooper is professor of law at California Western School of Law in San Diego and was a U.S. delegate to the World Intellectual Property Organization’s Advisory Committee on Enforcement.


Fernando Garibay is a former executive, producer and artist at Interscope and founder of the Garibay Center, a global creativity research institute.


Kashyap Kompella is CEO of RPA2AI Research and a visiting professor for artificial intelligence at BITS School of Management (BITSoM).
