Timely nutrition assessment and intervention in organ transplant recipients may improve outcomes surrounding transplantation. Because nutritional status is a potentially modifiable risk factor, the development of strategies designed to optimize nutritional status decreases the short-term risks in the post transplant period (Russo et al., 2010).
Despite many advances in surgical techniques, diagnostic approaches and immunosuppressive strategies, survival after heart transplantation is limited by the development of cardiac allograft vasculopathy (CAV) which is the most important cause of death late after transplantation (Hosenpud et al., 1998; Pethig et al., 1997) and by the adverse effects of immunosuppression.
Primary prevention of CAV in heart transplant (HT) recipients should include strict control of cardiovascular risk factors (hypertension, diabetes, hyperlipidemia, smoking and obesity), as well as strategies for the prevention of cytomegalovirus (CMV) infection (Costanzo et al., 2010). The sum of various risk factors has a negative impact on survival (Almenar et al., 2005). In addition, malnutrition increases the risk of infection post transplant and may reduce survival (Hasse, 2001).
It has been reported that improper dietary habits, low physical activity and the side effects of immunosuppressive therapy might explain the weight gain, the altered lipid pattern and the increase in insulin resistance after transplant (Evangelista et al., 2005; Flattery et al., 2006; Udden et al., 2003). The metabolic picture after transplantation is also worsened by the immunosuppressive drugs (Anker et al., 1997; Kemna et al., 1994). Therefore, the development of strategies to reduce post transplant body weight and to improve insulin resistance is important (Senechal, 2005).
Immunosuppressive agents, such as prednisone, cyclosporine, mycophenolate mofetil and sirolimus, are associated with hyperlipidemia (Bilchick, 2004). Glucocorticoids play a major role in the development of post transplant osteoporosis (Epstein & Shane, 1996; Epstein et al., 1995; Glendenning et al., 1999; Reid et al., 1988; Rodino & Shane, 1998; Shane, 2000; Stempfle et al., 1999), and it is known that their side effects, such as weight gain and increased appetite (Kahn & Flier, 2000; Udden et al., 2003), may induce insulin resistance and diabetes mellitus.
Osteoporosis is a leading cause of morbidity; the most rapid bone loss occurs during the first three months and in the first two years after cardiac transplantation (Hasse, 2001; Henderson et al., 1995).
In addition, statins are usually required in transplant recipients to achieve low LDL cholesterol levels. High concentrations of these drugs, particularly in combination with cyclosporine, may increase the risk of side effects (Vorlat et al., 2003).
Recommendations for specific levels of nutrients should be made after the following factors are considered: nutritional status, body weight, age, gender, metabolic state, stage and type of organ failure, presence of infection, malabsorption or induced losses, goals and comorbid conditions (Hasse, 2001).
The effectiveness of nutritional education depends on a number of psychological variables, including social, cultural and ethnic factors. These variables come into play when HT recipients return to their usual context. Although not a priority for the survival of the person, they are important to the goal of the nutritional intervention.
Finally, HT recipients should be cared for by a multidisciplinary team, including surgeons, cardiologists, nurses, psychologists and dieticians, among many others, together with ancillary services such as home care nursing, cardiac rehabilitation, psychologic support, nutritional planning and patient support groups, which can be used as resources in the follow-up of HT recipients (Costanzo et al., 2010).
A comprehensive nutritional assessment of transplant recipients should include a variety of parameters, including physical assessment, history, anthropometric measurements and laboratory tests (Hasse, 2001). This method has been used in subjects who were followed for four years. During the first year, follow-up visits occurred once a month and included evaluation of anthropometric measurements, body composition, biochemical parameters and dietary records; afterwards, body weight, dietary habits, physical activity level and biochemical parameters were collected by telephone once a year for three years (Guida et al., 2009).
Various parameters are analyzed in the nutritional status assessment (Sirvent & Garrido, 2009).
The personal interview is essential to review and investigate certain aspects which may have a direct bearing on the dietary pattern. Some key items to be considered are discussed below.
In relation to food intake, it is necessary to establish the usual pattern (quantity, quality and distribution of food intake) over the last days, weeks or months. The assessment is directed at whether the intake meets minimum requirements of adequacy and variety. Closely related to food intake is appetite, from which the presence of eating disorders and likely food intolerances can be detected, prompting further inquiry into the personal history of diseases that alter appetite.
From a functional perspective it is necessary to inquire about the ability to swallow, digestive function and the autonomy that enables food intake. It is also useful to know more about chewing and swallowing patterns, and dental status. Reports on the presence of diarrhea, constipation, vomiting and intolerance in general should be included, and the degree of independence and autonomy of the person in relation to feeding should be checked.
Finally, we must take into account all the circumstances which may influence and modify eating habits or energy expenditure, such as family relationships, group memberships, special diets, and the type and frequency of physical activity.
The most important items when assessing physical condition are hydration, weight, appearance, level of consciousness and autonomy. These parameters, together with the ability to feed oneself, body temperature, and the colour of the skin and mucous membranes, should be considered merely indicative. They should be complemented by biochemical data, anthropometric measurements and the dietary history of the subject.
Body composition analysis provides information on the percentages of muscle, fat and bone. There are different methods: chemical techniques, bioelectrical impedance and anthropometry, among others. Anthropometric measurements and bioimpedance are widely used non-invasive techniques.
Chemical techniques include the measurement of creatinine and 3-methylhistidine. In the first case, the total plasma creatinine concentration is used to estimate muscle mass, on the assumption that 98% of the body's creatine is found in muscle tissue and that 1 mg of creatinine is equivalent to 0.88 kg of muscle. This technique only provides data on muscle mass, does not evaluate other parameters of body composition and has a number of drawbacks. 3-methylhistidine is also used to estimate muscle mass. It generally shares the disadvantages of the previous determination, to which must be added the complexity of the analysis and its high cost.
Bioelectrical impedance analysis
Body composition can be determined by conventional bioelectrical impedance analysis (BIA). The subject rests in the supine position while a weak electric current (low intensity, high frequency) is passed between two electrodes, one placed on a hand and the other on a foot. The current is conducted differently through fat, which acts as an insulator, than through fat-free mass, in which water and electrolytes are good conductors. This technique has several advantages, including its relatively low price, the ease of transporting the equipment and its safety.
Because BIA equations must consider age, gender, race and body habitus of the patient (Dumler, 1997), accuracy of the results also depends on the equations used to determine body composition (de Fijter et al., 1997; Pichard et al., 1999).
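Single-frequency BIA equations typically regress fat-free mass on the impedance index (height squared divided by resistance) plus weight. As a minimal sketch only: the functional form below is the common one, but the coefficients shown are illustrative placeholders, not a validated, population-specific equation of the kind the text refers to.

```python
def fat_free_mass_kg(height_cm: float, resistance_ohm: float,
                     weight_kg: float,
                     a: float = 0.65, b: float = 0.26, c: float = -2.0) -> float:
    """Generic single-frequency BIA regression:
    FFM = a * height^2 / R + b * weight + c.
    Coefficients a, b, c are population-specific (age, gender, race,
    body habitus) and must come from a validated equation; the defaults
    here are illustrative placeholders only."""
    impedance_index = height_cm ** 2 / resistance_ohm  # cm^2 / ohm
    return a * impedance_index + b * weight_kg + c

def fat_mass_kg(weight_kg: float, ffm_kg: float) -> float:
    """Fat mass is simply body weight minus fat-free mass."""
    return weight_kg - ffm_kg
```

Because the coefficients are population-specific, the same raw resistance reading yields different body-composition estimates depending on which published equation is applied, which is the source of the accuracy caveat above.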
Although BIA has become widely available, a single-frequency test may not be valid in transplant candidates due to body fluid shifts (Hasse, 2001).
The anthropometric method is highly recommended for several reasons: it is simple, accurate, accessible, comfortable and economical. Its reliability depends on the skill of the anthropometrist and the rigor applied in making the measurements. The protocol must be standardized so that results can be compared (Sirvent & Garrido, 2009).
Nutritional anthropometry is based on the study of a number of somatic measurements of the proportions of the human body. The data obtained from anthropometry (weight, height, perimeters, diameters, lengths and skinfolds) are further processed by applying different regression equations and statistical formulas to yield information on body composition.
The parameters commonly used are weight, height and body mass index. Weight is an easily obtainable and reproducible indicator of body mass. Height, by contrast, provides less sensitive information on nutritional deficiencies. Both parameters can be combined in the body mass index (BMI), also known as the Quetelet index, defined as body weight in kilograms divided by the square of body height in metres.
Obesity is characterized by an excess of body fat. Several methods have been introduced to quantify it. The most common are measurement of body fat by bioelectrical impedance techniques and measurement of body density by weighing subjects underwater, with subsequent calculation of fat mass (Flier, 2001). As it is difficult to ascertain the exact amount of body fat, a number of markers have been developed to quantify obesity.
BMI is the most widely used parameter for characterization of abnormalities of body weight (Kahn et al., 2006). The recommended classification for BMI adopted by the Expert Panel on the Identification, Evaluation and Treatment of Overweight and Obesity in Adults, and endorsed by the National Institutes of Health and the WHO, is: BMI < 18.5 (underweight), 18.5 to 24.99 (normal weight), 25 to 29.99 (overweight), 30 to 34.99 (obesity class I), 35 to 39.99 (obesity class II) and > 40 (obesity class III) (National Heart, Lung and Blood Institute, 1998).
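The Quetelet index and the classification above can be expressed directly; this short sketch applies the cut-offs exactly as listed in the text.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Quetelet index: body weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def bmi_class(bmi_value: float) -> str:
    """Classification endorsed by the NIH/WHO, as given in the text."""
    if bmi_value < 18.5:
        return "underweight"
    if bmi_value < 25:
        return "normal weight"
    if bmi_value < 30:
        return "overweight"
    if bmi_value < 35:
        return "obesity class I"
    if bmi_value < 40:
        return "obesity class II"
    return "obesity class III"
```

For example, a recipient weighing 80 kg with a height of 1.75 m has a BMI of about 26.1 and falls into the overweight category.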
BMI at the time of transplant is an important predictor of post transplant outcomes, including survival, perioperative morbidity, post transplant cardiovascular comorbidities, long-term complications of transplantation and survival on the waiting list. HT recipients across a broad range of BMI (including the normal, overweight and obesity I subgroups) achieved good long-term post transplant outcomes. However, recipients at the extremes (e.g. underweight and obesity II/III) had significantly higher morbidity and mortality compared with the other groups. The diminished survival in the underweight (BMI < 18.5) group resulted from excess morbidity in the first year post transplantation; however, with correction of their heart failure and subsequent reversal of their cachectic state, their risk of death, along with their mean BMI, normalized after the initial post transplant period. Obesity II/III (BMI > 35) was also associated with diminished survival; however, this appears to result from higher morbidity and mortality over the long term (Russo et al., 2010).
In cardiac surgery patients, a low BMI increased the relative hazard of death, and low serum albumin increased the risk of infection. Rapp-Kesek et al. suggest that these parameters provide useful information in the preoperative evaluation (Rapp-Kesek et al., 2004).
On the other hand, skinfolds and body circumferences are widely used (Sirvent & Garrido, 2009).
In this sense, the most commonly used measures are the triceps skinfold (TSF), the mid-arm circumference (MAC) and the arm muscle circumference (AMC). The TSF allows fat mass to be assessed; it is expressed in mm and is measured with a calliper at the midpoint between the acromion and the olecranon. The MAC provides information about total arm mass; it is measured with a tape measure midway between the acromion and the olecranon and is expressed in cm. Finally, the AMC is used to assess muscle mass; it is expressed in cm and is calculated as AMC (cm) = MAC (cm) - 0.314 x TSF (mm).
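The arm muscle circumference formula above is a direct arithmetic derivation from the two measured values; as a minimal sketch:

```python
def arm_muscle_circumference(mac_cm: float, tsf_mm: float) -> float:
    """AMC (cm) = MAC (cm) - 0.314 x TSF (mm), as given in the text,
    where MAC is the mid-arm circumference (cm) and TSF the triceps
    skinfold (mm).  The factor 0.314 (= pi / 10) converts the skinfold
    in mm to the circumference contributed by the fat layer in cm."""
    return mac_cm - 0.314 * tsf_mm
```

For example, a mid-arm circumference of 30 cm with a 15 mm triceps skinfold gives an AMC of about 25.3 cm.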
Biochemical parameters make it possible to obtain further information on nutritional status. Among others, they include:
• Albumin is one of the most widely used. This protein has a long half life (18-20 days), so its determination is a poorly sensitive marker of recent nutritional disorders; instead, it is useful for assessing prolonged and serious states of malnutrition. The normal concentration for adults is > 3.5 g/dl. Values between 2.8 and 3.4 g/dl indicate low-risk hypoalbuminemia, and values below 2.8 g/dl indicate high-risk hypoalbuminemia.
• Transferrin is also widely used. It has a shorter half life, between 8 and 10 days, and its main function is iron transport. It is more sensitive than albumin to nutritional changes and responds more quickly to changes in protein status. Normal levels are between 150 and 200 mg/dl; values between 100 and 150 mg/dl are considered a moderate deficiency, and values below 100 mg/dl a severe deficiency.
• Prealbumin is a transport protein with a half life of 2 days. It decreases rapidly when energy or protein intake is low and responds very quickly to nutritional rehabilitation. Normal values are between 10 and 40 mg/dl.
• Retinol-binding protein has a very short half life of approximately 12 hours. Its normal value is around 7.6 mg/dl. Its sensitivity to protein-energy deprivation is high, but (like prealbumin) it lacks diagnostic specificity because of the impact of other processes on its levels.
• Fibronectin. Studies have shown low levels in situations of fasting and acute protein-energy malnutrition; it normalizes rapidly with nutritional rehabilitation. Normal values are around 169 mg/ml.
• Nitrogen balance provides information on the protein reserve. Its use is reserved for situations in which guidance on protein balance over a given period is needed.
• Lymphocyte count: an adequate supply of energy and protein is essential for maintaining normal immune function. Its drawbacks are its low specificity, since it can be altered by multiple factors, and the fact that the lymphocyte count is only altered once malnutrition is established.
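The cut-off values listed for albumin and transferrin amount to simple threshold classifications; the sketch below encodes them exactly as stated in the text (the function names are ours, chosen for illustration).

```python
def albumin_status(albumin_g_dl: float) -> str:
    """Thresholds from the text: normal > 3.5 g/dl; 2.8-3.4 g/dl
    low-risk hypoalbuminemia; below 2.8 g/dl high-risk hypoalbuminemia."""
    if albumin_g_dl > 3.5:
        return "normal"
    if albumin_g_dl >= 2.8:
        return "low-risk hypoalbuminemia"
    return "high-risk hypoalbuminemia"

def transferrin_status(transferrin_mg_dl: float) -> str:
    """Thresholds from the text: normal 150-200 mg/dl; 100-150 mg/dl
    moderate deficiency; below 100 mg/dl severe deficiency."""
    if transferrin_mg_dl >= 150:
        return "normal"
    if transferrin_mg_dl >= 100:
        return "moderate deficiency"
    return "severe deficiency"
```

In practice these thresholds are interpreted together with the half-lives noted above: transferrin and prealbumin flag recent changes, while albumin reflects longer-standing malnutrition.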
Finally, we mention the existence of other parameters used in specific situations, such as vitamin A, vitamin E and so on. Measurement and interpretation of waist circumference and fasting triglycerides could be used among heart transplant patients for early identification of men characterized by the presence of elevated fasting insulin and apolipoprotein B concentration, and small LDL particles. The presence of the atherogenic metabolic triad identifies patients at high risk of coronary artery disease, even in the heart transplant population (Senechal et al., 2005).
Nutrition is extremely important in the care of patients undergoing transplantation (Helton, 2001). Following an appropriate and strict dietary regimen after the HT reduces risk factors and should be considered seriously (Guida et al., 2009).
During the acute post transplant phase, adequate nutrition is required to help prevent infection, promote wound healing, support metabolic demands, replenish lost stores and perhaps mediate the immune response (Hasse, 2001).
The HT recipient must have two main dietary goals: a healthy and balanced diet (to provide all the necessary nutrients and avoid new heart attacks) and to maintain strict hygiene measures to reduce germs in food (Casado, 2005).
With regard to the rules of food hygiene, there are a number of recommendations among which are: do not take raw food during the first six months post transplant (considered the highest risk); casseroles prepared at home, if not eaten immediately after cooking, should be kept in the fridge covered and consumed within 24 hours; do not add anything raw to the preparation; it is preferable to keep sauces separate (Casado, 2005).
A diet of cardiovascular protection should include the following considerations about certain parameters:
Some investigators have described excessive weight gain following cardiac transplant (Baker et al., 1992; Johnson et al., 2002; Keteyian et al., 1992; Lake et al., 1993). On average these patients gain approximately 10 kg within the first year after the procedure. This weight gain increases the risk of secondary diseases (i.e. hypertension, diabetes and dyslipidaemias) (Williams, 2006).
Patients who were underweight or obese at one year post transplant were at greater risk of rejection over time than patients who were of normal weight or overweight. Post transplant cachexia and obesity are risk factors for poor clinical outcomes after heart transplantation. Grady et al. found that risk factors for increased body weight at one year after heart transplantation included both demographic factors (BMI at the time of transplant, younger age, black race) and clinical variables (etiology of heart disease, immunosuppression) (Grady, 2005).
After HT, regular weight-bearing and muscle-strengthening exercise should be encouraged to reduce the risk of falls and fractures and to increase bone density. Lifestyle modifications, including weight loss, a low-sodium diet and exercise, are appropriate adjuncts to facilitate control of blood pressure in HT recipients (Costanzo et al., 2010). Clinicians may assist patients in making changes to their post transplant lifestyle so as to return to a normal body weight.
Guida et al. showed the efficacy of dietary intervention in achieving early and late weight and metabolic control after HT. All subjects received a dietary plan elaborated to provide an energy intake of 25 kcal/kg ideal body weight/day, with 55% carbohydrate, 15% protein and 30% total fat (saturated fatty acids <10% of calories and dietary cholesterol <300 mg/d, according to the American Heart Association step one diet guidelines) (The Expert Panel, 1988). The patients were prescribed low-salt food so as not to exceed a sodium content of 1.5 g/d, and they were asked to limit the amount of additional salt to 3 g/d. Within this diet plan, all subjects were encouraged to increase their physical activity level up to 30 min/d three times a week and were strongly recommended to modify some of their habits (Guida et al., 2009).
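The energy and macronutrient targets of such a plan can be worked out arithmetically. The sketch below assumes the figures quoted above (25 kcal/kg ideal body weight/day, split 55/15/30) and converts percentages of energy into grams using the standard Atwater factors of 4, 4 and 9 kcal/g, an assumption not stated in the source.

```python
def diet_plan(ideal_body_weight_kg: float,
              kcal_per_kg: float = 25.0) -> dict:
    """Macronutrient targets per the plan described above:
    energy at kcal_per_kg * ideal body weight, split 55% carbohydrate,
    15% protein, 30% fat.  Atwater factors of 4, 4 and 9 kcal/g are
    assumed for converting energy percentages into grams."""
    energy = ideal_body_weight_kg * kcal_per_kg  # total kcal/day
    return {
        "energy_kcal": energy,
        "carbohydrate_g": energy * 0.55 / 4,  # 55% of energy at 4 kcal/g
        "protein_g": energy * 0.15 / 4,       # 15% of energy at 4 kcal/g
        "fat_g": energy * 0.30 / 9,           # 30% of energy at 9 kcal/g
    }
```

For a 70 kg ideal body weight this yields 1750 kcal/day, corresponding to roughly 241 g carbohydrate, 66 g protein and 58 g fat.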
To reach this goal it is indispensable to carry out an early and comprehensive programme providing a plan of nutritional interventions, education and lifestyle counselling. The authors demonstrated that dietary compliance helps to achieve good early and late control of weight and metabolic parameters, both in subjects enrolled during the first year after the transplant and in those enrolled later, even when high doses of immunosuppressive therapy are used. The beneficial effects of this intervention were still maintained after a 48-month follow-up period (Guida et al., 2009).
Incorporation of a weight loss plan, including diet, exercise and psychologic interventions into the discharge process with subsequent outpatient follow-up is recommended. The psychological interventions are often incorporated into a comprehensive weight loss programme (Grady, 2005).
The scientific literature shows different mood disorders in both the situation prior to transplantation and thereafter; the most common that have been tested are the anxiety disorders, depression and post traumatic stress disorder (Perez San Gregorio et al., 2005; Spaderna et al., 2007) and they have been found to adversely affect the ability to accept the new organ.
These negative emotions affect different areas of the daily life of transplant patients, including nutritional status, through either a decreased or an increased appetite and, consequently, weight loss or gain, both of which are contraindicated in transplanted patients.
Although a variety of weight loss programmes exist, it may be beneficial to consider weight loss plans that are individualized and tailored to incorporate the patient’s clinical status, age, cultural background, dietary preferences, exercise history, socio-economic and behavioural factors etc. Many patients also have established patterns of cooking and eating based on cultural background (Grady, 2005).
Moreover, the goal of a dietary intervention should be to optimize the nutritional status and to preserve the long-term renal function by avoiding unnecessary protein loads (Al et al., 2005).
With respect to lipid metabolism, the increase in body weight is correlated with the increase in serum lipid level during the first year after the transplant (Grady et al., 1991; Keogh et al., 1988), whereas a decrease in body weight or energy intake is effective to reduce blood cholesterol level (Kannel et al., 1979; Nichols et al., 1976; Kromhout, 1983).
Harrison et al. defined malnutrition as midarm muscle circumference and triceps skin fold measurement <25th percentile (Harrison et al., 1997). The malnutrition diagnosed by subjective and objective nutrition assessment parameters is common in solid organ transplant recipients and leads to increased morbidity and mortality (Helton, 2001). Because malnutrition is a marker of worsening heart failure (Anker et al., 1997) and is a known risk factor for poor outcomes after surgery (Buzby et al., 1980; Rady et al., 1997) low BMI may in fact be a stronger predictor of poor outcomes than obesity. The BMI is potentially modifiable through medical management and/or lifestyle changes (Russo et al., 2010).
Patients with chronic heart failure are often malnourished as a result of maldigestion, malabsorption and poor nutrient assimilation. This is compounded by the fact that many of these patients have a prolonged catabolic state characterized by increased energy expenditure coupled with anorexia and inadequate dietary intake.
Malnourished patients undergoing transplantation have increased morbidity and mortality, and increased overall hospital charges. Patients surviving transplantation have altered lipid and fat metabolism, and problems with obesity and accelerated atherosclerosis leading to cardiovascular death (Helton, 2001).