This claim may be incorrect… there are 535 members between the two houses of Congress.

How can you tell if an insurance company is lying to you? They tell you they are trying to save you money.

Does this sound familiar?

It has been reported that inflation over the last TWO YEARS has increased the average family’s expenditures by over $400/wk… Many small restaurants are still trying to recover from the business shutdowns during the pandemic and from increased operational costs due to inflation. This new IRS mandate imposes a lot more administrative costs on small restaurant owners to comply. President Biden had proposed hiring 87,000 new (ARMED) IRS agents to go after the rich and get them to pay “their fair share” – whatever that is. But now the IRS is going after the portion of our population that has the least resources to fight off an invasive IRS audit.

What other part of our population – one with limited resources – are several parts of the FED ALPHABET SOUP OF AGENCIES taking advantage of, causing all kinds of QOL issues for those in this population, directly and indirectly?

Is it just me, or does it seem like all too many schoolyard bullies really don’t grow up and mature, but get jobs somewhere in various bureaucracies where they can continue to “push people around without consequences”?

After pushing for tip credit elimination, Biden wants to increase taxes on tipped workers

https://minimumwage.com/2023/02/after-pushing-for-tip-credit-elimination-biden-wants-to-increase-taxes-on-tipped-workers/

In his State of the Union address this week, President Joe Biden doubled down on a mantra that no one earning less than $400,000 a year would pay new taxes. But the administration’s Internal Revenue Service quietly rolled out a new program to single out tipped restaurant workers to pay more taxes on tip earnings.

The IRS released a proposal for public comment on a new program that would “improve tip reporting compliance” to ensure more tips are reported as income, and taxed accordingly.

The program, called the Service Industry Tip Compliance Agreement (SITCA), seeks to use point-of-sale and electronic payment technology to track tip income received by service employees. Employers would be required to submit an annual report on tips received through these systems. This ultimately opens up employees’ tip income to IRS scrutiny and taxation.

Lawmakers rushed to voice their opinions on the proposal this week.

While the proposed program states that participation is voluntary, the IRS’s official announcement states that participation in the program would grant protection from liability for taking a tip credit and counting tips toward minimum wage requirements. Currently, the federal Fair Labor Standards Act allows employers to take a tip credit as long as employees’ wages plus tips amount to at least the $7.25 hourly minimum wage.

The Biden administration has vocally opposed the tip credit system that currently allows tipped service employees, many in the restaurant industry, to earn well beyond the minimum wage. President Biden championed the Raise the Wage Act in 2021, which would have eliminated the federal tip credit; it failed when a bipartisan group of lawmakers voted against the bill due to the harm it would cause tipped restaurant employees.

Now, the administration is using its IRS resources to go after employees’ tips instead of larger, prevalent issues of tax fraud.

 

DAILY REMEDY INTERVIEWS DR. NEIL ANAND, MD, ON PHYSICIAN ARREST AND DEA-DOJ DATA ANALYTICS AS A “JUNK SCIENCE”

https://youarewithinthenorms.com/2023/02/10/daily-remedy-interviews-dr-neil-anand-md-on-physician-arrest-and-dea-doj-data-analytics-as-a-junk-science/

REPORTED BY

youarewithinthenorms.com

NORMAN J CLEMENT RPH., DDS, NORMAN L. CLEMENT PHARM-TECH, MALACHI F. MACKANDAL PHARMD, BELINDA BROWN-PARKER, IN THE SPIRIT OF JOSEPH SOLVO ESQ., INC.T. SPIRIT OF REV. IN THE SPIRIT OF WALTER R. CLEMENT BS., MS, MBA. HARVEY JENKINS MD, PH.D., C.T. VIVIAN, JELANI ZIMBABWE CLEMENT, BS., MBA., IN THE SPIRIT OF THE HON. PATRICE LUMUMBA, IN THE SPIRIT OF ERLIN CLEMENT SR., WALTER F. WRENN III., MD., JULIE KILLINGWORTH, LESLY POMPY MD., CHRISTOPHER RUSSO, MD., NANCY SEEFELDT, WILLIE GUINYARD BS., JOSEPH WEBSTER MD., MBA, BEVERLY C. PRINCE MD., FACS., NEIL ARNAND, MD., RICHARD KAUL, MD., LEROY BAYLOR, JAY K. JOSHI MD., MBA, ADRIENNE EDMUNDSON, ESTER HYATT PH.D., WALTER L. SMITH BS., IN THE SPIRIT OF BRAHM FISHER ESQ., MICHELE ALEXANDER MD., CUDJOE WILDING BS, MARTIN NJOKU, BS., RPH., IN THE SPIRIT OF DEBRA LYNN SHEPHERD, BERES E. MUSCHETT, STRATEGIC ADVISORS

EXPOSING DOJ-DEA MANIPULATION OF DATA ANALYTICS – JUNK SCIENCE

THE TARGETING OF OUTLIERS

DEA’S STRONG-ARMING OF THE MEDICAL PROFESSION 

The canary in the coal mine is a metaphor alluding to the initial sign of an impending calamity. An apt metaphor for what just transpired. Recently, the Drug Enforcement Administration (DEA) issued a public safety alert informing the public about a “sharp increase in fake prescription pills containing Fentanyl and Meth[amphetamines].”

The alert was the canary in the coal mine, but the impending calamity is not what we are led to believe. We have known of a rise in counterfeit prescription pills for many years. It was a rise induced directly by DEA policies inflicted upon patients.

“THE DEA ORCHESTRATED A CULTURE OF FEAR THAT CAN BE DESCRIBED AS MEDICAL MCCARTHYISM, IN WHICH THE FEAR OF PROSECUTION DEFINED THE QUALITY OF CARE.”

The impending calamity is not the risks counterfeit pills pose to the public but the legal liability the DEA faces for its failed policies.

For years federal agencies and physician advocacy groups remained in lockstep on the causes of the opioid epidemic. We heard it all: lax prescription guidelines, overprescribing by willing physicians, and the addictive nature of opioids.

A CULTURE OF FEAR AND INTIMIDATION

Unfortunately, these purported causes proved erroneous at best or deliberately misrepresented at worst. And implementation of policies based on these causes led to a rise in suicide and clinical stigmatization among chronic pain patients and patients with substance use dependencies.

FOR NOW, YOU ARE WITHIN

YOUAREWITHINTHENORMS.COM (WYNTON MARSALIS, CONCERTO FOR TRUMPET AND 2 OBOES, 1984)

THE NORMS

Surescripts: Gatekeeper for Electronic Prescribing Explained

Are AI Chatbots in Healthcare Ethical?

https://www.medpagetoday.com/opinion/second-opinions/102987

Their use must require informed consent and independent review

Within a week of its Nov. 30, 2022 release by OpenAI, ChatGPT was the most widely used and influential artificial intelligence (AI) chatbot in history, with over a million registered users. Like other chatbots built on large language models, ChatGPT is capable of accepting natural language text inputs and producing novel text responses based on probabilistic analyses of enormous bodies, or corpora, of pre-existing text. ChatGPT has been praised for producing particularly articulate and detailed text in many domains and formats, including not only casual conversation, but also expository essays, fiction, song, poetry, and computer programming languages. ChatGPT has displayed enough domain knowledge to narrowly miss passing a certifying exam for accountants, to earn C+ grades on law school exams and B- grades on business school exams, and to pass parts of the U.S. Medical Licensing Exams. It has been listed as a co-author on at least four scientific publications.

At the same time, like other large language model chatbots, ChatGPT regularly makes misleading or flagrantly false statements with great confidence (sometimes referred to as “AI hallucinations”). Despite significant improvements over earlier models, it has at times shown evidence of algorithmic racial, gender, and religious bias. Additionally, data entered into ChatGPT is explicitly stored by OpenAI and used in training, threatening user privacy. In my experience, I’ve asked ChatGPT to evaluate hypothetical clinical cases and found that it can generate reasonable but inexpert differential diagnoses, diagnostic workups, and treatment plans. Its responses are comparable to those of a well-read and overly confident medical student with poor recognition of important clinical details.

This suddenly widespread use of large language model chatbots has brought new urgency to questions of artificial intelligence ethics in education, law, cybersecurity, journalism, politics — and, of course, healthcare.

As a case study on ethics, let’s examine the results of a pilot program from the free peer-to-peer therapy platform Koko. The program used the same GPT-3 large language model that powers ChatGPT to generate therapeutic comments for users experiencing psychological distress. Users on the platform who wished to send supportive comments to other users had the option of sending AI-generated comments rather than formulating their own messages. Koko’s co-founder Rob Morris reported: “Messages composed by AI (and supervised by humans) were rated significantly higher than those written by humans on their own,” and “Response times went down 50%, to well under a minute.” However, the experiment was quickly discontinued because “once people learned the messages were co-created by a machine, it didn’t work.” Koko has made ambiguous and conflicting statements about whether users understood that they were receiving AI-generated therapeutic messages but has consistently reported that there was no formal informed consent process or review by an independent institutional review board.

ChatGPT and Koko’s therapeutic messages raise an urgent question for clinicians and clinical researchers: Can large language models be used in standard medical care or should they be restricted to clinical research settings?

In terms of the benefits, ChatGPT and its large language model cousins might offer guidance to clinicians and even participate directly in some forms of healthcare screening and psychotherapeutic treatment, potentially increasing access to specialist expertise, reducing error rates, lowering costs, and improving outcomes for patients. On the other hand, they entail currently unknown and potentially large risks of false information and algorithmic bias. Depending on their configuration, they can also be enormously invasive to their users’ privacy. These risks may be especially harmful to vulnerable individuals with medical or psychiatric illness.

As researchers and clinicians begin to explore the potential use of large language model artificial intelligence in healthcare, applying principles of clinical research will be key. As most readers will know, clinical research is work with human participants that is intended primarily to develop generalizable knowledge about health, disease, or its treatment. Determining whether and how artificial intelligence chatbots can safely and effectively participate in clinical care would prima facie appear to fit perfectly within this category of clinical research. Unlike standard medical care, clinical research can involve deviations from the standard of care and additional risks to participants that are not necessary for their treatment but are vital for generating new generalizable knowledge about their illness or treatments. Because of this flexibility, clinical research is subject to additional ethical (and — for federally funded research — legal) requirements that do not apply to standard medical care but are necessary to protect research participants from exploitation. In addition to informed consent, clinical research is subject to independent review by knowledgeable individuals not affiliated with the research effort — usually an institutional review board. Both clinical researchers and independent reviewers are responsible for ensuring the proposed research has a favorable risk-benefit ratio, with potential benefits for society and participants that outweigh the risks to participants, and minimization of risks to participants wherever possible. These informed consent and independent review processes — while imperfect — are enormously important to protect the safety of vulnerable patient populations.

There is another newer and evolving category of clinical work known as quality improvement or quality assurance, which uses data-driven methods to improve healthcare delivery. Some tests of artificial intelligence chatbots in clinical care might be considered quality improvement. Should these projects be subjected to informed consent and independent review? The NIH lays out a number of criteria for determining whether such efforts should be subjected to the added protections of clinical research. Among these, two key questions are whether techniques deviate from standard practice, and whether the test increases the risk to participants. For now, it is clear that use of large language model chatbots both deviates from standard practice and introduces novel, uncertain risks to participants. It is possible that in the near future, as AI hallucinations and algorithmic bias are reduced and as AI chatbots are more widely adopted, their use may no longer require the protections of clinical research. At present, informed consent and institutional review remain critical to the safe and ethical use of large language model chatbots in clinical practice.

30 companies you don’t want to work for


Is this the corporate entity and its unhappy employees you are entrusting your healthcare to?

Can an artificial intelligence system emulate common sense?

I asked this question to this AI BOT: https://chat.openai.com. I just saw where this bot was able to pass a medical licensing board/test. There was a piece on TV this past week about a pilot program in a hospital where cameras were monitoring pts in a particular hospital unit; the cameras fed into an AI program to see if AI could detect changes in a pt’s health status based on body language and/or movement.

Back in 1965 Gordon Moore – co-founder of Intel – observed that the number of transistors on a chip would DOUBLE roughly EVERY TWO YEARS, with the cost per transistor falling as it did… that became known as “Moore’s Law”. I bought my first TI calculator in the mid-70’s and my first personal computer in the late 70’s. With the book that came with the computer I taught myself how to write software and actually ended up selling some ultra-specialty software packages. I have had a front row seat to watch Moore’s Law evolve.
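That doubling claim is just exponential compounding, which is easy to sketch in code. A minimal illustration (the roughly 2,300 transistors of the 1971 Intel 4004 is assumed as a starting point purely for the sake of the example):

```python
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Estimate transistor count, assuming one doubling every `doubling_years`."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Fifty years of doubling every two years multiplies the count ~33-million-fold.
for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
```

Real chips only loosely track this curve, but the compounding is why five decades of it feels like a front-row seat to exponential change.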

We are witnessing the next quantum leap forward in how computers will impact our lives, and Moore’s Law may now be as antiquated as the computers that were around when Gordon Moore established it. Artificial Intelligence (AI) is here and now. Here is the answer I got from ChatGPT when I asked the following question. This BOT has only been around a few months and it has already passed a Medical Licensing test/board…

How long before AI is involved with Electronic Medical Records, helping – or dictating to – practitioners as to what is appropriate therapy, and/or overseeing the insurance/PBM prior approval process? I asked this BOT this question because people dealing with subjective diseases (pain, anxiety, depression, ADD/ADHD, mental health) have few/no lab tests to confirm that the pt actually has the medical issue and/or the impact the disease has on the pt’s QOL. This is where a practitioner’s common sense and clinical experience come into play in developing a treatment plan for a pt.

To be quite honest, artificial intelligence involved in medical care… scares the CRAP OUT OF ME! I got a free Alexa a few years back and I have never plugged her in nor connected her to my home internet/router; I have never had much trust in her either.

Can an artificial intelligence system emulate common sense?

Currently, artificial intelligence systems do not possess common sense in the same way that humans do. Common sense refers to a broad range of implicit knowledge and understanding that people have about the world and human behavior. It enables humans to make inferences and judgments in situations that they have not encountered before.

While AI systems have made significant progress in specific domains such as image recognition, language processing, and game playing, they do not yet have the ability to understand and reason about the world in the same way that humans do. To develop AI systems with common sense, researchers would need to find ways to explicitly represent and incorporate this type of knowledge and reasoning into AI algorithms.

2022 Letter to the American Medical Association

https://www.daily-remedy.com/2022-letter-to-the-american-medical-association/

“The absurdity of our trial is acutely apparent when contrasting our credentials with that of the Government.”

Content has been preserved in its original form and syntax. Modifications were made only to protect the interest of private citizens and to facilitate readability.


November 18, 2022

 

Re: An Outcry Against the Evil Movement of Scapegoating Physicians

Jack Resneck, Jr., MD, President of AMA

James L. Madara, MD, CEO & Executive VP of AMA

AMA Plaza, 330 N. Wabash Ave. Suite 39300 Chicago, IL 60611-5885

 

Dear Drs. Resneck, Jr. and Madara:

 

I write to express my perpetual concern about the nation’s scapegoating-physician movement. (A few years ago, on several occasions, I wrote to the AMA, specifically to Dr. Barbara McAneny, then the AMA President, regarding the same issue.)

 

More than a decade ago, in his book “The Criminalization of Medicine: America’s War on Doctors” (Praeger Publishers, 2008), Professor Ronald T. Libby clearly articulated this evil scapegoating-physician movement:

 

“The government has made medical doctors scapegoats for the financial crisis of health care in the country and for the failed war on drugs. Physicians’ role as sacrificial lambs follows the long history of political scapegoats in the United States…Physicians have become the enemies of the country’s health care.” (p. 181)

 

“The physician as scapegoat is important to politicians both to blame for a major threat to society and it promises a relatively easy and inexpensive solution. The solution is to eliminate wealthy and corrupt doctors who threaten the nation’s health care. A tough law and order campaign against corrupt doctors therefore creates the illusion that the state is acting as the protector of the citizenry.” (p. 182)

 

The prosecution of Dr. J. Patrick Couch and myself exemplifies this evil witch-hunt movement by the government. Dr. Couch and I were arrested in May 2015 during “Operation Pilluted,” in which 22 doctors and pharmacists from four Southern states were arrested. In 2016, we were indicted on a myriad of felony charges including RICO conspiracy, drug trafficking conspiracies, multiple substantive drug dispensing counts, healthcare fraud conspiracy, kickback conspiracy, etc. After a lengthy jury trial in 2017, we were convicted as “drug dealers” and “gangsters” and received 20 and 21 years of imprisonment, respectively.

 

Professor Libby in his book correctly pointed out the contributing role played by physicians themselves:

 

“Medical doctors fit the pattern of a group suffering from a collapse of their image. They do not recognize that there is a national political campaign against them for fraud, kickbacks, and drug diversion. Instead, they see themselves as victims of local vendettas by ambitious prosecutors, competitors, and troubled or greedy whistleblowers. Like other scapegoats, they tend to believe that their cases are individual and unique to them. THIS EXPLAINS THE FACT THAT PHYSICIANS TURN THEIR BACKS ON OTHER DOCTORS who have been egregiously targeted by the government for investigation and prosecution.” (emphasis added)

 

Indeed, the absurdity of our trial is acutely apparent when contrasting our requisite specialty/subspecialty credentials with those of the Government expert witnesses at our trial. Dr. Couch’s primary specialty was anesthesia; mine was physical medicine and rehabilitation. Both Dr. Couch and I were fellowship-trained in interventional pain management and had practiced full-time for over a decade at our own interventional pain clinic, Physicians’ Pain Specialists of Alabama (PPSA), Mobile, Alabama. Dr. Couch had achieved four board certifications, three of which were recognized by the American Board of Medical Specialties (ABMS); I had achieved eight board certifications, four of which were recognized by the ABMS. Both of us had the academic title of Adjunct Clinical Associate Professor from local medical institutions. Together, Dr. Couch and I had over 120 publications in PubMed.

 

By contrast, none of the three Government medical expert witnesses, namely Drs. G—- (AZ), A—- (MS), and V—- (MS), were fellowship-trained; nor did they have any academic title. Dr. A—- was a hospitalist who had completed residency training only in internal medicine. Dr. G—- had no residency training whatsoever; he had only completed a one-year medical internship. Collectively, Drs. G—-, A—-, and V—- had zero publications in PubMed. Over the years, however, Drs. G—- and A—- had each received huge sums of money, namely $320,000 and $325,000, respectively, from DOJ/DEA for testifying as Government witnesses or consultants.

 

Remarkably, none of them was even qualified to testify as a pain expert in civil medical malpractice suits in the State of Alabama, yet they were allowed by the court to testify as “pain experts” at our criminal trial. Their unreliable and biased trial testimony played an important role in misguiding the jury to convict Dr. Couch and myself as “drug dealers” and “mobsters.”

 

On June 27, 2022, the Supreme Court, in a 9-0 decision, reversed and vacated the judgment of the appellate court. The Court emphasized that the Government needed to prove that I knowingly or intentionally acted as a “drug dealer,” i.e., with a guilty intention, when prescribing controlled medicines in treating my patients. The significance of this ruling was discussed in the following attached articles: “The Supreme Court Decision on Federal Prescribing Rules for Controlled Substances,” by Matt Lamkin, JD, MA, published in JAMA (October 3, 2022); and “US Supreme Court Delivers Much-Needed Certainty to Controlled Substances Prescribing,” by healthcare-and-policy law professors Kelly K. Dineen Gillespie, Taleed El-Sabawi, and Jennifer D. Oliva, published in Health Affairs (November 1, 2022). Candidly, I consider the Court’s ruling a victory for physicians’ autonomy in the practice of medicine.

 

Here the critical question is: What should we do in order to change our image as societal scapegoats? Professor Libby insightfully admonished: “Scapegoats do not see the broader aspects of their collapsed credibility. Societal scapegoats are deemed to have violated a code of conduct. A scapegoat is already found guilty before he is indicted or tried; there is a widespread societal consensus that what he did was wrong.” (p. 182)

 

Professor Libby suggested a step-by-step approach in solving this problem:

 

“The history of scapegoats is that once they have been stigmatized, they will continue to be abused by the government in the future. The only way to transform a scapegoat’s public image is to first recognize that they are political scapegoats. This has occurred twice. The first time was during the 1920s when doctors were made scapegoats in the first war on drugs. It happened again in 1990, when MEDICAL ASSOCIATIONS THROUGHOUT THE UNITED STATES SUPPORTED A CALL BY THE AMERICAN MEDICAL ASSOCIATION (AMA) for the resignation of the Inspector General of the HHS for his role in Medicare/Medicaid fraud inspection abuses targeting doctors.” (emphasis added)

 

“In both instances, the leadership of the AMA called a halt to the senseless destruction of doctors’ careers and lives in the government’s vain pursuit of drug addicts who were treated by physicians and to stamp-out medical fraud. The president of the AMA contacted the president of the United States in both cases and demanded an end to the witch hunts.” (p. 183)

 

Needless to say, it is time for all physicians of this country, irrespective of our differences in specialties and subspecialties, to stand together as one, in a collective effort, demanding a halt to the government’s evil witch hunt — the rabid criminalization of physicians. Further, we need to come up with a due measure within our medical profession to condemn and sanction those unethical and unqualified physicians, such as G—- and A—-, who shamelessly betrayed their fellow physician colleagues by becoming the Government’s hired mouthpieces, saying whatever the Government paid them to say to convict the accused physicians.

 

This is the reason I write to you and your organization. For your reference, I have also taken the liberty of including an earlier letter I wrote to the Alabama Board of Medical Examiners. From these writings, you may see that it was never one way or the other how the specialty of pain medicine was practiced. Rather, it was “Damned If You Do and Damned If You Don’t.” After all, the criminal indictment was essentially a pretext or a special permit to allow the Government to seize physicians’ assets.

 

It is imperative that physicians of all specialties or subspecialties unite in protest against this scapegoating witch hunt so as to protect and preserve physicians’ collective reputation as well as the practice of medicine in this country. To fundamentally change the status of social scapegoats, we have to make every effort. Thank you very much for your attention to this matter.

 

Sincerely,

 

Xiulu Ruan, MD

 

Thailand drops a BOMBSHELL on Pfizer, they’re PI*SED | Redacted with Clayton Morris



When the first COVID-19 vaccines came around, I listened to, read, and watched anything I could find about these so-called vaccines that were being created at WARP SPEED. I presumed that what the public was being told was just a fraction of reality. The first RED FLAG was that these vaccines were going to be put into distribution in less than ONE YEAR… when a new medication typically takes 10+ yrs of clinical trials. Over my FIVE-DECADE career, I had seen more than a few new meds pulled from the market after 1-2 yrs because of serious issues that did not appear in the clinical trials, which has always made me not totally comfortable with meds just released from clinical trials.

Second RED FLAG was that Congress passed a law that indemnified all the pharmas from any liability/collateral damage from these vaccines.

Third RED FLAG was that the vaccines were being released and administered with only an FDA EUA (Emergency Use Authorization) – which in “bureaucratic speak” means that hundreds of millions of vaccinations were administered basically under an “unofficial clinical trial.”

Fourth RED FLAG was that the mRNA vaccines’ temperature storage requirements were going to require up to MINUS 94 F. That takes some special refrigeration equipment that is not necessarily cheap.
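For reference, −94 °F works out to −70 °C; the conversion is simple arithmetic, sketched below (the function names are mine, purely illustrative):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def c_to_f(c):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return c * 9 / 5 + 32

print(f_to_c(-94))  # -70.0
```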

The J&J/Janssen version does not require a storage temp outside of what is normally found in your basic residential/commercial refrigerator/freezer, and this version of the vaccine is based on the process we have used to make our annual flu vaccines for years. Because of all these factors, I chose for us to get the J&J/Janssen vaccination and booster, and since only mRNA versions are now being offered as boosters… we are done with COVID-19 boosters.