
Home Forums General forum Do the mad have the right to vote?

  • Author
    Messages
    • #56098 Reply
      Jeanmonnaie
      Guest

      A genuine question.

      Same goes for prisoners, by the way.

    • #56101 Reply
      Emile Novis
      Guest

      If the mad have the right to run for office believing they will "save France" with their little iron will, I don't see why we shouldn't have the right to vote.

      • #56106 Reply
        Jeanmonnaie
        Guest

        Saving your family is normal. Why not a country?
        Besides, the Marxist goes further than the right-wingers, since it's a messianism.
        The mad are vulnerable people who must be protected.
        Same with prisoners: I don't understand the logic behind their right to vote.

        • #56113 Reply
          aka deleatur
          Guest

          The question you're asking is more legal than psychiatric. The category of "mad" hasn't been used in psychiatry for at least 40 years; it was replaced by that of mental disorders, then by mental handicaps or deficiencies, and so on. Psychiatric care today moves towards greater acquisition and assertion of individuals' autonomy and of their rights, helped in this by pharmacology, which has made enormous progress over the last 40 years when properly dosed, that is, handled by psychiatrists properly trained in pharmacological dosage.
          There is no reason today to deny a "mad" person the expression of an opinion and the vote, any more than to prisoners when they have not been stripped of their civic rights. The really important category to take into account is that of consent, of will. We have to act as if. But most of the time, it works.

          • #56115 Reply
            Jeanmonnaie
            Guest

            A prisoner can be stripped of his civic rights, a fairly common practice in other countries. Among people with schizophrenia, a majority refuse to follow long-term treatment. While this is less problematic than for prisoners, it remains a delicate question. Even people with Down syndrome have the right to vote in France.

            • #56124 Reply
              aka deleatur
              Guest

              Yes, and people with Down syndrome, as you may know, live with more or less intense degrees or levels (these aren't the right words, my apologies) of the condition. Most of those you see in the street (or in films, or in companies) are at the less intense "levels": they are socialized, have a job, a love life, can talk with you, are aware of their situation, and are often happy. Unfortunately, the most severe cases need medical assistance and care all day long; it is then difficult to speak of discernment. Most psychotics also have discernment, but it can be episodic.
              .
              I don't really believe the mad-prisoner comparison holds up very long, except when detention comes with psychiatric disorders induced, or worsened, by the conditions of detention, which is often the case. Unless you consider, but that is a metaphor, an image, that madness is a prison.

              • #56125 Reply
                JeanMonnaie
                Guest

                A simple argument: a person with Down syndrome has a mental age of 12. Children don't vote.
                As for prisoners, I see no valid argument to justify it. A convict no longer takes part in the life of the city. The same goes for expatriates, who shouldn't vote either.

          • #56117 Reply
            Emile Novis
            Guest

            @JeanMonnaie
            My reply was a quip meant to question the category of "mad" a little; it is actually very vague and often used any which way. Besides, as far as political "leaders" go, one may wonder about their personalities. Not certain that a Macron or a Zemmour could be considered "balanced" people, for instance. Mélenchon too, deep down.
            So I don't intend to take part in this discussion.

            • #56119 Reply
              Emile Novis
              Guest

              Obviously, the ironic point of my reply lay in the fact that "we" was written in bold. But you didn't even pick up on it, as usual. So you misunderstood, as you often do.

              • #56123 Reply
                JeanMonnaie
                Guest

                Oh, I did pick up on it, don't worry... It's precisely because I know how to read you that I so often flip your arguments like a pancake.

            • #56120 Reply
              JeanMonnaie
              Guest

              A very simple argument for measuring a person's madness would be the AAH (the French disabled adults' allowance). If we consider that a person with mental disorders can receive this allowance, it means they suffer daily from their handicap, hence from their madness. From there, one can consider that denying them the right to vote is justified. You have to be a bit mad to run for a presidential election. That's not very important.

              • #56127 Reply
                aka deleatur
                Guest

                Once again, there are variations from one individual to another; you can't generalize. Just as there are people on the RSA who have assets.
                Mental health has never predicted the quality of a vote. People sound in mind and body are perfectly capable of voting like idiots for idiots, whichever side you stand on. Or of convincing themselves that voting is important and useful, which could very well be considered a madness; in this matter, social norms count as much as psychiatric determinants.

                • #56128 Reply
                  JeanMonnaie
                  Guest

                  The mental age of a person with Down syndrome is generally between 6 and 12. My argument about them is about mental age. The fact that a sane person can vote badly doesn't answer that argument. Likewise, as I told Emile Novis, if a person receives the AAH, it means their mental illness is considered to handicap them daily, so they lack the balance needed to vote.

                  • #56139 Reply
                    deleatur
                    Guest

                    JeanMonnaie: "Likewise, as I told Emile Novis, if a person receives the AAH, it means their mental illness is considered to handicap them daily, so they lack the balance needed to vote."
                    .
                    You know nothing at all about any of this, so you'd better shut your mouth and try not to forget that you're just an idiot who gorges on sordid news items to feed his delusions about this world.

                    • #56140 Reply
                      Jeanmonnaie
                      Guest

                      Ok, madman.

                      • #56143 Reply
                        deleatur
                        Guest

                        Did you take me for a moron unable to recognize the little icon next to your username? You really are a moron.

                        • #56153 Reply
                          Jeanmonnaie
                          Guest

                          For the record: someone used my username for that last message, but I would have said the same thing.

                        • #56154 Reply
                          deleatur
                          Guest

                          I know full well you would have said the same thing; that's exactly what proves me right when I point out that you're just an idiot mouthing off without knowing anything.

                        • #56155 Reply
                          Jeanmonnaie
                          Guest

                          I know that a person in psychological distress is not fit to vote.
                          Your current little act confirms it perfectly.

                        • #56156 Reply
                          deleatur
                          Guest

                          JeanMonnaie: What, JeanMonnaie sides with people born before shame was invented? How surprising, who could have suspected such a thing!

                        • #56157 Reply
                          Jeanmonnaie
                          Guest

                          The shame is on your side this time.
                          You manage to get yourself hated by your own ideological camp.
                          Your friends couldn't stand the sight of you anymore.
                          And here it's the same.
                          I advise you to get help.

                        • #56158 Reply
                          Jeanmonnaie
                          Guest

                          Apologize.
                          Promise to see a shrink for treatment.
                          Everyone here will forgive your passing madness.

                        • #56159 Reply
                          deleatur
                          Guest

                          JeanMonnaie: What do you think? That I'm going to care about the opinion of a loser who swears on his kids' lives that he's leaving, only to tell us afterwards that it's our fault he has to stay?
                          .
                          Take a good look at yourself and you'll easily understand that I couldn't care less what a clown like you thinks of my mental health.

                        • #56160 Reply
                          Jeanmonnaie
                          Guest

                          For once, what's said about you is a matter of consensus on this forum.
                          So?
                          Are you calling the doctor?

                        • #56161 Reply
                          deleatur
                          Guest

                          JeanMonnaie: Why is it always other people's fault when you don't know how to behave? Why are you incapable of taking responsibility for your actions? Why are you only good at pretending you're leaving while clinging to this forum with fallacious arguments?
                          .
                          Because you're unhinged.

                        • #56162 Reply
                          Jeanmonnaie
                          Guest

                          You're today's topic.
                          You're the one acting mad.
                          So why do you refuse to get help?

                        • #56163 Reply
                          deleatur
                          Guest

                          JeanMonnaie: Why do you refuse to take into account the fact that the shrinks I've met consider that I've understood my problems remarkably well?

                        • #56165 Reply
                          Jeanmonnaie
                          Guest

                          Not well enough to solve them.
                          That's the whole problem.

                        • #56168 Reply
                          deleatur
                          Guest

                          JeanMonnaie: So you know nothing about me, yet you still know more than the shrinks who've crossed my path?
                          .
                          You're completely wrecked. And that's no surprise coming from a guy who spends his time trying to convince us we're wrong, so he can forget that he can't even provide for his kids.

                        • #56170 Reply
                          Jeanmonnaie
                          Guest

                          I know you're mad because you show it.
                          So why don't you get treatment?

                        • #56173 Reply
                          deleatur
                          Guest

                          JeanMonnaie: At what point should I trust the opinion of a guy whose word means nothing? At what point should I care about the opinion of an idiot who knows nothing about any of this but thinks he can lecture me from the height of his ignorance?
                          .
                          I don't see it. All I see is that if you spent less time trying to convince us we're wrong and more time taking care of yourself and your problems, you might have the means to be a father worthy of the name. So go look after your kids instead of sniffing around my backside.

                        • #56175 Reply
                          JeanMonnaie
                          Guest

                          You're mad in the eyes of most of this forum, actually.
                          You can well imagine that a sane person doesn't do what you do.
                          No doubt your lousy character plays a big part in your attitude, but still.
                          Anyway, keep not getting treatment and playing the loon. It changes nothing in my life.

                        • #56176 Reply
                          Le psychanalyste
                          Guest

                          I confirm, he's mad.

                        • #56180 Reply
                          deleatur
                          Guest

                          JeanMonnaie: Well then, where are your kids, big guy? Why aren't you with them enjoying the holidays? Why are you here with me, trying to convince me the shrinks are wrong and that from the height of your RSA you're in a position to lecture me?

                        • #56190 Reply
                          JeanMonnaie
                          Guest

                          I doubt the shrink told you that you were doing fine and didn't need treatment.

                        • #56192 Reply
                          deleatur
                          Guest

                          JeanMonnaie: Where are your kids, big guy? Why aren't you with them? Why are you on the internet trying to convince me I'm nuts? Why can't you tell yourself you have better things to do with your life than waste it here trying to convince me I'm doing badly and urgently need help?
                          .
                          Can you answer those questions? No, you're only good at brushing them aside, because it obsesses you that you can't make me bend under the weight of your delusions about me.

                        • #56196 Reply
                          JeanMonnaie
                          Guest

                          A person who does what you do is doing badly. You can well imagine your behaviour is abnormal.
                          The denial is strong.

                          Anyway, I'll leave you to it. Keep rolling on the floor and playing the loon.

                        • #56197 Reply
                          deleatur
                          Guest

                          JeanMonnaie: Yeah, my behaviour is abnormal. Normally you do like you do: you let social pressure crush you, you conform to the opinions of morons unable to see past their lousy prejudices, you don't give yourself the right to make a mess to defend the existence of a limit. But big guy, being abnormal suits me just fine, since in the end I can legitimately claim to have made psychoanalysis into a science that holds up, while the only thing you can do is make a fool of yourself trying to make me believe your opinion is worth more than the doctors'.

                        • #56200 Reply
                          Kuri
                          Guest

                          Hello, may we know where your major contribution to psychoanalysis can be found? Is there a collection of articles you've written on the subject, or do they exist only in your sick brain?

                        • #56202 Reply
                          deleatur
                          Guest

                          Kuri: It exists only in my sick brain. And it exists only in my sick brain because what I have to say drives the Lacan fans mad with rage. You want to play smart because taking punches in the face non-stop doesn't make it easy to put everything down properly on paper? Go ahead, have fun; it's not as if someone who approaches me that way could have anything substantial to say about this story.

                        • #56203 Reply
                          Ostros
                          Guest

                          Have you slept at all since 4pm yesterday?
                          I didn't dare call the paramedics because I seem to remember hearing that in cases of decompensation they call the cops...
                          You worry me; it looks like you haven't even taken a break to eat...

                        • #56215 Reply
                          deleatur
                          Guest

                          Ostros: Do you want my mother's number so you can share your two-bit anxieties with her?

                        • #56218 Reply
                          Kuri
                          Guest

                          Everyone has your mother's number

                        • #56220 Reply
                          deleatur
                          Guest

                          Kuri: Ok, milquetoast.

                        • #56217 Reply
                          deleatur
                          Guest

                          Ostros: Otherwise, am I allowed to point out that if we assume you're right, that I've blown a gasket, it's the deleatur who bragged about finding it funny to make me lose my grip whose backside you should go sniff? Or is it too complicated for you to look the problem in the face?
                          .
                          You're completely stupid, you know. You bust my balls with your stupid good conscience, and instead of questioning yourself when you see how much you annoy me, you feel entitled to insist instead of shutting your mouth, so what do you want me to tell you? You're the one who's nuts; you're the one who doesn't understand that between your delusions about me and the real of the guy on the other side of your screen, there's a thing called reality. So you get a grip, you leave me alone, you mind your own business and you credit my mother with being able to take care of me if one day I need someone to call the paramedics because madness has finally carried me away.
                          .
                          My ordeal is made of your good intentions, so drop it, and then I'll be able to calm down and I'll be able to rest. Wretched bitch.

                        • #56219 Reply
                          Ostros
                          Guest

                          I'll gladly take your mother's number, just in case.
                          And I'm not poor anymore. Thanks.

                        • #56221 Reply
                          deleatur
                          Guest

                          Ostros: Go on then, my dear, cough up an email address.

                        • #56222 Reply
                          Ostros
                          Guest
                        • #56223 Reply
                          deleatur
                          Guest

                          Or better, cough up your phone number. That way my mother calls you around 9pm, and since this matters so much to you, you'll make sure to be there.

                        • #56224 Reply
                          Ostros
                          Guest

                          Send me her email, or give her mine, otherwise.

                        • #56225 Reply
                          deleatur
                          Guest

                          Ostros: You cough up your number and we call you at 9pm. If that doesn't suit you, then you drop it, or else you can count on me to ask my mother to introduce me to her lawyer so I can tell her about you and your behaviour.

                        • #56226 Reply
                          deleatur
                          Guest

                          Dear Bertrand delapelle, no, let's not get carried away. Given your state right now, I'm not going to give you my personal number.

                          I'm sincerely worried, given the number of threads started or dug up one after another, but since it annoys you I'll stop talking about it.

                          If you feel like giving me the email of someone close to you, just in case, great. If not, too bad.

                          Have a good evening and try to rest your nerves.

                          Ostros.
                          .
                          What do you think? That I won't be there when my mother contacts you? Do you imagine she'll refuse to give me your phone number if I need it to go to the cops?

                        • #56227 Reply
                          Ostros
                          Guest

                          "or else you can count on me to ask my mother to introduce me to her lawyer so I can tell her about you and your behaviour."
                          How do you expect me not to be worried by that staggering sentence? Anyway, I said I'd stop, so let's stop there.

                        • #56229 Reply
                          deleatur
                          Guest

                          Ostros: What's so staggering about calling on a lawyer to force you to stay in your lane? What's so delusional about believing it must be possible to slap a complaint on you so you stop your nonsense?
                          .
                          Ah, but I know: I'm a psychotic who has decompensated, so the reasonable thing would be to forget the lawyer and set my mind on killing you. Is that it?

                        • #56230 Reply
                          Ostros
                          Guest

                          What isn't logical in your reasoning is that you omit that you've been posting non-stop on a forum since yesterday afternoon. And that my request is in no way an act that could be considered an offence.
                          Your way of describing my situation is inaccurate.

                        • #56231 Reply
                          Ostros
                          Guest

                          The situation*

                        • #56232 Reply
                          Ostros
                          Guest

                          "so the reasonable thing would be to forget the lawyer and set my mind on killing you. Is that it?"
                          Even that makes no sense.

                        • #56245 Reply
                          deleatur
                          Guest

                          All that just to wriggle out of it? Fabulous.

                        • #56233 Reply
                          deleatur
                          Guest

                          Ostros: You wanted to talk to my mother; I warned her, I explained what this was about, so now she's worried about whether we should expect the cops and the paramedics to show up over the weekend.
                          .
                          You wanted this conversation, so you're going to take the trouble to reassure her in person.

                        • #56234 Reply
                          Kuri
                          Guest

                          Hahahahahaha what a sketch, I'm dying

                        • #56228 Reply
                          Kuri
                          Guest

                          Lmao, your phony threats. Stick to psychoanalysis, because you know nothing about law, simpleton

                        • #56247 Reply
                          Kuri
                          Guest

                          Nothing to say to that, eh, simpleton
                          Seems to me you're about as well versed in law as you are in psychoanalysis

                        • #56249 Reply
                          La psychiatrie
                          Guest

                          He's scared of you, that big clown. He thinks he can get away with anything with Ostros because she's a girl.

                        • #56254 Reply
                          Tristan
                          Guest

                          I'll remember this one: "My ordeal is made of your good intentions." Noted.

                        • #56268 Reply
                          Demi Habile
                          Guest

                          Yeah, and then you try to open other people's eyes, because we're not done watching this forum rot under the schizophrenic's assaults.

      • #56109 Reply
        deleatur
        Guest
        In the Monte Carlo algorithm, we would like to perform global transformations of
        the gauge field: first, because local transformations would also require the computation
        of the full determinant (which is a non-local quantity), and secondly to reduce
        autocorrelations. However, after a global transformation the value of the action can
        change a lot, and the new gauge configuration is unlikely to contribute significantly.
        It would therefore require very small steps in the update algorithm, leading to high
        autocorrelation and a large number of updates. The HMC algorithm [35], presented in
        this section, solves these problems: it allows for global transformations while
        maintaining good efficiency.
        As explained in the previous section, the update algorithm used to generate the Markov
        chain is defined by its transition probability P. This probability should satisfy the
        ergodicity and detailed balance conditions. In our case, the probability is written
        P = P_E P_A, where:
        — P_E is the probability to generate {U}_{n+1} from {U}_n during the update process.
        It depends on the details of the algorithm.
        — P_A is the acceptance probability deciding whether or not the new gauge configuration
        is kept. It is chosen such that the detailed balance property is satisfied.
        Now, the idea is to interpret the action (2.8) as a potential associated to a fictitious
        Hamiltonian, and to add a new set of momentum fields \Pi which play the role of conjugate
        variables associated to the gauge field U_\mu. Gauge links U_\mu(x) are SU(3) group elements,
        so we have one su(3) Lie algebra element \Pi_\mu(x) per site x and per direction \mu. Since
        the action S = S_G + S_F does not depend on the momenta, they can be factorized out
        and do not change the physical results:
        \[
        \langle O \rangle
        = \frac{1}{\tilde{Z}} \int \mathcal{D}[\Pi_\mu]\, \mathcal{D}[U_\mu]\, \mathcal{D}[\phi]\, \mathcal{D}[\phi^\dagger]\;
          \langle O[U_\mu] \rangle_F \; e^{-\left( S_G + S_{PF} + \sum_{x \in \Lambda, \mu} \frac{1}{2} \Pi_\mu^2(x) \right)}
        = \frac{1}{Z} \int \mathcal{D}[U_\mu]\, \mathcal{D}[\phi]\, \mathcal{D}[\phi^\dagger]\;
          \langle O[U_\mu] \rangle_F \; e^{-\left( S_G + S_{PF} \right)} \,,
        \]
        where \tilde{Z} is defined as in (2.1) but now with the total action including the momenta \Pi_\mu.
        Then, the total action, including the pseudofermion and momentum fields, is:
        \[
        S_{HMC} = \frac{1}{2}\Pi^2 + S_{QCD}(U)
                = \frac{1}{2}\Pi^2 + S_G(U) + \phi^\dagger \left( D^\dagger(U)\, D(U) \right)^{-1} \phi \,. \tag{2.12}
        \]
        This action describes the evolution of a classical system in a 4-dimensional space. The
        associated time is not related to physical time but rather to the computer time which
        labels the gauge configurations; this is called Molecular Dynamics (MD). Quantum
        fluctuations of the quantum field in 4 dimensions are thus described by the trajectory
        of a classical system in a 5-dimensional space-time. Hamilton's equations for this
        classical system are



        \[
        \dot{U} = \frac{\delta S_{HMC}}{\delta \Pi} \,, \qquad
        \dot{\Pi} = -\frac{\delta S_{HMC}}{\delta U}
                  = -\frac{\delta S_G}{\delta U}
                    - \phi^\dagger \left[ (M^\dagger M)^{-1} \frac{\delta M^\dagger}{\delta U} (M^\dagger)^{-1}
                    + M^{-1} \frac{\delta M}{\delta U} (M^\dagger M)^{-1} \right] \phi \tag{2.13}
        \]
        where the right-hand side of the second equation is called the force term; its exact
        expression depends on the lattice action used in the simulation. The first equation is
        numerically easy to solve, but the second one is much more difficult since it requires
        the evaluation of the inverse Dirac matrix. Finally, the acceptance probability P_A is
        chosen to be
        \[
        P_A(\{U, \Pi\} \to \{U', \Pi'\}) = \min\left( 1 \,,\; e^{-S(U', \Pi') + S(U, \Pi)} \right) \,, \tag{2.14}
        \]
        so that the total probability P of the Markov process is
        \[
        P(\{U\} \to \{U'\}) = \int \mathcal{D}[\Pi]\, \mathcal{D}[\Pi']\;
        P_M[\Pi] \cdot P_E(\{U, \Pi\} \to \{U', \Pi'\}) \cdot P_A(\{U, \Pi\} \to \{U', \Pi'\}) \tag{2.15}
        \]
        where P_M \sim \exp(-\frac{1}{2}\sum \Pi^2) is a Gaussian distribution. One can prove that this
        probability P satisfies the detailed balance condition if we also impose that the evolution
        equations are reversible and area-preserving. In the continuum theory this is always
        true thanks to Liouville's theorem, but not necessarily with integration algorithms
        where a discrete step size is used. A typical example of an algorithm used in
        simulations is the leapfrog algorithm.
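As a rough illustration (not taken from the source text), here is a minimal leapfrog integrator in Python for a toy harmonic potential standing in for the gauge action; all names are illustrative. The momentum-flip test shows the exact reversibility that the detailed balance argument relies on:

```python
def leapfrog(u, p, force, eps, n_steps):
    """Kick-drift-kick leapfrog: reversible and area-preserving."""
    p = p + 0.5 * eps * force(u)        # initial half-step for the momentum
    for _ in range(n_steps - 1):
        u = u + eps * p                 # full step for the coordinate
        p = p + eps * force(u)          # full step for the momentum
    u = u + eps * p
    p = p + 0.5 * eps * force(u)        # final half-step for the momentum
    return u, p

# Toy potential S(u) = u^2/2, so the force is -dS/du = -u
force = lambda u: -u
u0, p0 = 1.0, 0.3
u1, p1 = leapfrog(u0, p0, force, eps=0.1, n_steps=20)

# Reversibility: flip the momentum, integrate again, and the start point returns
u2, p2 = leapfrog(u1, -p1, force, eps=0.1, n_steps=20)
print(abs(u2 - u0), abs(p2 + p0))   # both ≈ 0 up to floating-point rounding
```

Note that the energy is only approximately conserved along the trajectory (the step-size error the accept-reject step corrects for), while reversibility holds exactly.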
        During the molecular dynamics, the system lies on a hypersurface of constant energy
        and explores only a subspace of the full phase space (\Pi, U). Nevertheless, during this
        step the dynamics can produce gauge configurations with very different values of the
        QCD action S_{QCD}(U) alone. The heat-bath step, at the beginning of each MD trajectory,
        randomly refreshes the momenta of the system and thus ensures ergodicity.
        An interesting property of this algorithm is that, since the action is a constant of
        motion during the molecular dynamics, the acceptance rate is theoretically equal to
        one. But because of the numerical errors introduced by the leapfrog integration, the
        acceptance rate is not exactly one, though it remains very high (errors are of order
        O(\epsilon^2) for a first-order integrator, and the integration step \epsilon is usually
        chosen such that P_A \approx 80\%).
Summary:
The use of a heat-bath algorithm and of molecular dynamics is at the origin of the
name Hybrid Monte Carlo. The algorithm can finally be summarized as follows:
— At the beginning of each step of the MC, the momenta conjugate to the gauge
fields are generated randomly according to a Gaussian distribution via a heat-bath
algorithm. Pseudofermion fields are generated in two steps: first, a random field χ
is generated according to a Gaussian distribution and, secondly, the pseudofermions
are obtained via φ = Dχ.
— Then, the gauge fields and momenta are updated using the molecular dynamics
evolution, eq. (2.13). During this step, the pseudofermion fields are kept constant.
— At the end, the new gauge configuration is accepted with a probability P_A given by
eq. (2.14); this step corrects for the numerical errors introduced by the leapfrog
algorithm. If the configuration is rejected, we restart from the previous state, which
is included again in the Markov chain.
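As an illustration, one HMC step (heat bath, leapfrog, accept/reject) can be sketched on a toy one-dimensional system; the quadratic action S(q) = q²/2 and all parameter values are illustrative assumptions, not the actual lattice setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def S(q):                 # toy "action": S(q) = q^2/2 (illustrative, not QCD)
    return 0.5 * q * q

def grad_S(q):
    return q

def hmc_step(q, eps=0.1, n_md=20):
    """One HMC trajectory: heat-bath momentum refresh, leapfrog MD, accept/reject."""
    p = rng.normal()                       # heat-bath step: Gaussian momentum
    H_old = 0.5 * p * p + S(q)
    q_new, p_new = q, p
    p_new -= 0.5 * eps * grad_S(q_new)     # leapfrog: initial half-step for p
    for _ in range(n_md - 1):
        q_new += eps * p_new
        p_new -= eps * grad_S(q_new)
    q_new += eps * p_new
    p_new -= 0.5 * eps * grad_S(q_new)     # final half-step
    H_new = 0.5 * p_new * p_new + S(q_new)
    if rng.random() < np.exp(min(0.0, H_old - H_new)):   # Metropolis test
        return q_new, True
    return q, False        # rejected: the previous state re-enters the chain

q, n_acc, samples = 0.0, 0, []
for _ in range(5000):
    q, acc = hmc_step(q)
    n_acc += acc
    samples.append(q)
print(n_acc / 5000)        # acceptance rate very close to one
```

The Metropolis test at the end makes the algorithm exact even though the leapfrog integration is only approximate.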
        2.3 The quark propagator
        2.3.1 Definition
        Once gauge configurations are generated, the next step is to evaluate the quark
propagator appearing in Wick contractions in eq. (2.4). In lattice QCD, the Dirac
operator, for a given flavour, is written D^{ab}_{\alpha\beta}(y, x), where (a, α, y) and (b, β, x) are
respectively the color, spinor and space-time indices associated to the sink and to the
source. The size of the matrix is then 12N × 12N, where N is the total number of sites
of the lattice. Finally, the propagator, G, is defined as the inverse of the Dirac operator:

\sum_{y \in \Lambda} D^{ab}_{\alpha\beta}(x, y)\, G^{bc}_{\beta\gamma}(y, z) = \delta(x, z)\, \delta^{ac}\, \delta_{\alpha\gamma} ,   (2.16)
and depends on the lattice action used for the simulation. Since the Dirac operator
only involves neighboring points of the lattice, the matrix is sparse and algorithms based
on conjugate gradient methods are particularly well suited. Nevertheless, the exact
all-to-all inversion, i.e., the solution from each source point to each sink point of the
lattice, is impossible with present-day computational capabilities (it would require
12N ∼ 10^8 inversions for typical lattices). The problem can be simplified by considering
the following equation (spinor and color indices are omitted for simplicity):
        D(x, y)ψ(y) = δ(x) , ψ(y) = G(y, x)δ(x) , (2.17)
where the solution vector, ψ(y), corresponds to the one-to-all solution for a point source
placed at the origin, δ(x). It corresponds to one row of the full propagator matrix and
requires only 12 inversions (one per spin-color component of the source). Moreover, the backward propagator can be
obtained from the forward propagator by using the γ₅-hermiticity relation
G(y, x) = \gamma_5\, G(x, y)^\dagger\, \gamma_5.
        A drawback of this method is that only a small part of the gauge information is used
        since we don’t exploit the full translational invariance of the propagator (the source is
        fixed). Since generating gauge configurations is extremely costly, it would be preferable
        to exploit them to reduce the gauge noise. Moreover, point-to-all propagators are not
        suited when using non-local interpolating fields.
        2.3.2 All-to-all propagators
Solutions exist to evaluate all-to-all propagators and are based on stochastic methods
[36]. The idea is to use, for each gauge configuration, an ensemble of N_s stochastic
sources satisfying

\lim_{N_s \to \infty} \frac{1}{N_s} \sum_{s=1}^{N_s} \eta^a_\alpha(x)_s\, \eta^b_\beta(y)_s^{*} = \delta_{\alpha\beta}\, \delta^{ab}\, \delta_{x,y} ,   (2.18)

where each component is normalized to one, \eta^a_\alpha(x)^{[r]*}\, \eta^a_\alpha(x)^{[r]} = 1 (no summation). This
can be implemented using random Gaussian numbers on each site of the lattice, for each
color and spinor index. Then the Dirac operator is inverted for each source:

D(x, y)\, \psi(y)_s = \eta(x)_s , \qquad D^{ab}_{\alpha\beta}(x, y)\, \psi^b_\beta(y)_s = \eta^a_\alpha(x)_s ,

where \psi^a_\alpha(x)_s is the solution vector of size 12N. An unbiased estimator of the propagator
is then given by contracting the solution vector with the corresponding source:

\psi^a_\alpha(x)_s = G^{ab}_{\alpha\beta}(x, y)\, \eta^b_\beta(y)_s \quad \Rightarrow \quad G^{ab}_{\alpha\beta}(x, y) = \frac{1}{N_s} \sum_{s=1}^{N_s} \psi^a_\alpha(x)_s\, \eta^b_\beta(y)_s^{*} .   (2.19)
Of course, the number of stochastic sources is always finite and, since the inversion of the
Dirac operator is often the most demanding part of the algorithm, it can be quite limited.
The condition (2.18) is then only approximately fulfilled, and the quark propagator
obtained from eq. (2.19) can be very noisy. Indeed, it requires the cancellation of
the U(1) noise over the whole lattice, whereas the signal decreases exponentially with the
space-time separation. Therefore, even if some terms cancel on average, they can
contribute significantly to the variance. An extremely useful tool to reduce the noise is
time dilution [36].
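As a minimal sketch of the stochastic estimator (2.19), one can replace the Dirac operator by a small, well-conditioned random matrix and use Z₂ noise sources; the matrix and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
# toy stand-in for the Dirac operator: a small, well-conditioned random matrix
D = np.eye(n) + 0.1 * rng.normal(size=(n, n))
G_exact = np.linalg.inv(D)

def stochastic_inverse(n_src):
    """Unbiased estimator G ~ (1/Ns) sum_s psi_s eta_s^*, with psi_s = D^{-1} eta_s."""
    G = np.zeros((n, n))
    for _ in range(n_src):
        eta = rng.choice([-1.0, 1.0], size=n)   # Z2 noise: each component has |eta| = 1
        psi = np.linalg.solve(D, eta)           # one inversion per source
        G += np.outer(psi, eta) / n_src
    return G

err_small = np.linalg.norm(stochastic_inverse(20) - G_exact)
err_large = np.linalg.norm(stochastic_inverse(2000) - G_exact)
print(err_small, err_large)   # the noise decreases roughly like 1/sqrt(Ns)
```

The residual noise on each entry comes from the imperfect cancellation of the off-diagonal terms in eq. (2.18) at finite N_s.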
        2.3.3 Time dilution
In general, dilution consists in splitting the source η into several secondary (diluted)
sources with vanishing overlap. For example, in time dilution, a secondary source is
defined on a single time slice and set to zero everywhere else. The advantage is that
the condition (2.18) is automatically fulfilled for t_x ≠ t_y. Since the time dependence of
the quark propagator is known to be large, this leads to a significant variance reduction:

\eta(\vec{x}, t) = \sum_{\tau} \eta(\vec{x}, t)^{[\tau]} , \qquad \eta(\vec{x}, t)^{[\tau]} = 0 \ \text{unless}\ t = \tau .
The Dirac operator is now inverted on each diluted source, and the full propagator is
recovered by summing over all secondary sources:

G^{ab}_{\alpha\beta}(x, y) = \frac{1}{N_s} \sum_{s=1}^{N_s} \sum_{\tau} \psi^a_\alpha(x)^{[\tau]}_s\, \eta^b_\beta(y)^{[\tau]*}_s ,

where, for full time dilution, N_\tau = N_s \times T. For example, as shown in ref. [36], on a
32^3 × 64 lattice, the variance will be smaller when using one complete source fully
time-diluted rather than 64 sources without dilution. Finally, dilution could also be applied
        to spinor or color indices. The limit where dilution is applied to all space-time, color and
        Dirac indices would correspond to the computation of the exact all-to-all propagator.
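The variance reduction from time dilution can be illustrated with the same kind of toy operator; the identification of "time slices" with index blocks, and all sizes, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 8, 4                           # toy lattice: T "time slices" of L sites each
n = T * L
D = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # toy stand-in for the Dirac operator
G_exact = np.linalg.inv(D)
t_of = np.repeat(np.arange(T), L)     # time slice of each lattice index

def estimate(n_src, diluted):
    """Stochastic all-to-all estimator, with or without full time dilution."""
    G = np.zeros((n, n))
    for _ in range(n_src):
        eta = rng.choice([-1.0, 1.0], size=n)
        if diluted:
            for tau in range(T):      # T inversions per source (one per diluted source)
                eta_tau = np.where(t_of == tau, eta, 0.0)
                G += np.outer(np.linalg.solve(D, eta_tau), eta_tau) / n_src
        else:
            G += np.outer(np.linalg.solve(D, eta), eta) / n_src
    return G

err_plain = np.linalg.norm(estimate(10, False) - G_exact)
err_dilut = np.linalg.norm(estimate(10, True) - G_exact)
print(err_plain, err_dilut)   # dilution removes all noise terms with t_x != t_y
```

The diluted estimator costs T inversions per source, but only noise terms within the same time slice survive, in line with the discussion above.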
        2.3.4 Numerical implementation
        In this work, we used the dfl_sap_gcr inverter from the DD-HMC package [37, 38].
        It is based on a conjugate gradient algorithm with Schwarz-preconditioning [39] and low
        mode deflation [40, 41] which significantly reduces the increase in computational cost as
        the quark mass is lowered.
        Krylov Subspace Iteration Methods
The algorithm to compute the quark propagator is based on a conjugate gradient
algorithm. This kind of algorithm (Krylov subspace iteration methods) is well suited
for large and sparse matrices like the Dirac operator.
        Spectral decomposition
The low modes of the Dirac operator lead to numerical difficulties when the quark
mass is lowered. The idea is to compute exactly the low modes (i < N_0) of the operator
and to treat them separately using the decomposition

D^{-1}(x, y) = \sum_{i=1}^{N_0} \frac{1}{\lambda_i}\, v^{(i)}(x) \otimes v^{(i)}(y)^\dagger + \widetilde{D}^{-1}(x, y),

where (v^{(i)}, \lambda_i) are respectively the eigenvectors and eigenvalues. The remaining part of
the Dirac operator, \widetilde{D}^{-1}(x, y), is then better conditioned (since the low modes have been
suppressed) and easier to invert numerically. The problem comes from the fact that the
eigenvalue density increases with the volume, making the exact evaluation of the
low-lying eigenvalues impossible for large lattices. However, as shown in ref. [40], only a
small number of the low-lying modes needs to be computed exactly to capture the essential
physics, so that the method can also be used for large volumes.
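A sketch of the deflation idea, assuming a generic Hermitian positive-definite matrix and a hand-rolled conjugate gradient (this is an illustration of the decomposition above, not the actual dfl_sap_gcr implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.normal(size=(n, n))
A = A @ A.T / n + 0.01 * np.eye(n)    # toy Hermitian positive-definite operator with small low modes

def cg(matvec, b, tol=1e-8, maxit=5000):
    """Plain conjugate gradient; returns the solution and the iteration count."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for it in range(1, maxit + 1):
        Ap = matvec(p)
        a = rs / (p @ Ap)
        x += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            return x, it
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x, maxit

b = rng.normal(size=n)
x_plain, it_plain = cg(lambda v: A @ v, b)

# deflation: the N0 lowest modes are treated exactly, CG acts on their complement
N0 = 20
lam, V = np.linalg.eigh(A)
Vlow = V[:, :N0]
P = np.eye(n) - Vlow @ Vlow.T                 # projector away from the low modes
x_low = Vlow @ ((Vlow.T @ b) / lam[:N0])      # exact low-mode part of A^{-1} b
x_defl, it_defl = cg(lambda v: P @ (A @ (P @ v)), P @ b)
x_total = x_low + x_defl
print(it_plain, it_defl)                      # the deflated solve needs fewer iterations
```

Removing the lowest modes raises the smallest eigenvalue seen by CG, which is why the deflated solve converges in fewer iterations.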
        Standard optimizations
Since the square of the Dirac operator acts on even and odd sites separately, one
can use the so-called even-odd preconditioning. It significantly reduces the condition
number of the Dirac operator and leads to an acceleration of the solver. It also reduces
the memory needed to store the fermionic fields.
        2.4 Correlators
In lattice QCD simulations, we are often interested in the special case of two- or
three-point correlation functions. In this section, I explain in more detail how
two-point correlation functions can be computed; an example of a three-point
correlation function will be given in Chapter 4. We will see that two-point correlation
functions are useful to extract the energy levels of mesons and some simple matrix elements
such as decay constants.
        2.4.1 Interpolating operator
An interpolating operator, O, associated to a bound state M is an operator with
a non-zero overlap with the state of interest. In particular, it must carry the same
quantum numbers, such as parity, spin or flavour numbers. Then, for a scalar field, we have

\langle 0 | O(x) | M \rangle = \sqrt{Z}\, e^{-iP\cdot x} ,

where \sqrt{Z} = \langle 0 | \hat{O} | M \rangle is the overlap factor associated with the interpolating operator.
Similarly, for a vector field,

\langle 0 | O_\mu(x) | M(\epsilon) \rangle = \epsilon_\mu \sqrt{Z}\, e^{-iP\cdot x} ,

where \epsilon_\mu is the polarization of the field. In practice, the interpolating field couples to
every particle with the same quantum numbers, and different choices are possible. They
lead to different overlap factors Z and couple differently to the excited states.
The simplest interpolating operator can be constructed from one of the 16 linearly
independent combinations of gamma matrices (denoted by Γ) such that it has the correct
quantum numbers (see Table 2.1):

O(x) = \bar{\psi}_1(x)\, \Gamma\, \psi_2(x) ,   (2.20)
where ψ₁ and ψ₂ may correspond to different flavours. Generally, an interpolating
operator for a particle with spatial momentum \vec{q} is given by

O_{\vec{q}}(t) = \frac{1}{V} \sum_{\vec{x}} e^{-i\vec{q}\cdot\vec{x}}\, O(\vec{x}, t) .
In particular, to compute the mass of a meson, it is convenient to work at vanishing
momentum, so we sum over all spatial lattice points. Finally, defining \bar{\Gamma} = \gamma_0 \Gamma^\dagger \gamma_0, we
have

O^\dagger(x) = \bar{\psi}_2(x)\, \bar{\Gamma}\, \psi_1(x) ,   (2.21)

and the meson two-point correlation function at vanishing momentum is

C(t) = \langle O(t)\, O^\dagger(0) \rangle = \sum_{\vec{x}, \vec{y}, t_0} \langle O(\vec{x}, t_0 + t)\, O^\dagger(\vec{y}, t_0) \rangle ,
        where I have used the translational invariance.
               J^{PC}     Γ
Scalar         0^{++}     1
               0^{+-}     γ_0
Pseudoscalar   0^{-+}     γ_5, γ_0γ_5
Vector         1^{--}     γ_i, γ_0γ_i
Axial          1^{++}     γ_5γ_i
Tensor         1^{+-}     γ_iγ_j

Table 2.1 – Quantum numbers associated to some local interpolating operators of the
form O(x) = \bar{\psi}(x)\Gamma\psi(x).
        2.4.2 Asymptotic behavior
In this section, I use the notation \hat{O} for the time-independent operator in the
Schrödinger picture and O(t) for the time-dependent operator in the Heisenberg picture.
Then, using the spectral decomposition

1 = \sum_n \int \frac{d^3 p_n}{(2\pi)^3\, 2E_n}\, |M_n\rangle \langle M_n| ,
the two-point correlation function becomes

C(t) = \langle O(t)\, O^\dagger(0) \rangle = \sum_{n=1}^{\infty} \frac{1}{2E_n}\, \langle 0 | \hat{O} | M_n \rangle \langle M_n | \hat{O}^\dagger | 0 \rangle\, e^{-E_n t} ,   (2.22)

where E_n is the energy of the n-th state of the Hamiltonian and where I used the
relativistic normalization of states \langle M_n | M_m \rangle = 2E_n\, \delta_{nm}. In general, due to periodic boundary
conditions, the particles can also propagate in the opposite time direction. But, in this work, I will
mostly study heavy-light mesons where the heavy quark propagates only forward in time
(see Section 3.1.6), so I neglect these terms here. In particular, if we denote by M = M_1 the
ground state, then, at sufficiently large time, the correlator has the asymptotic behavior
C(t) \xrightarrow[t\to\infty]{} \frac{1}{2E_M}\, \langle 0 | \hat{O}_\Gamma | M \rangle \langle M | \hat{O}^\dagger_{\Gamma'} | 0 \rangle\, e^{-E_M t} ,   (2.23)

from which we can extract the energy of the ground state and the product of matrix
elements \langle 0 | \hat{O}_\Gamma | M \rangle \langle M | \hat{O}^\dagger_{\Gamma'} | 0 \rangle. Of course, on the lattice, the time t is always finite and
there are contributions of higher excited states which fall off exponentially with time,
with an exponent proportional to E_2 − E_1, the energy difference between the first excited
state and the ground state.
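The approach to the asymptotic behavior (2.23) can be illustrated with a synthetic two-state correlator; the energies and overlap factors below are made-up numbers:

```python
import numpy as np

E1, E2 = 0.5, 1.2        # made-up ground and excited state energies (lattice units, a = 1)
A1, A2 = 1.0, 0.8        # made-up overlap factors
t = np.arange(20)
C = A1 * np.exp(-E1 * t) + A2 * np.exp(-E2 * t)   # two-state version of eq. (2.22)

m_eff = np.log(C[:-1] / C[1:])   # effective mass; contamination dies off like e^{-(E2-E1)t}
print(m_eff[1], m_eff[-1])       # plateaus at E1 = 0.5 at large t
```

At small t the effective mass overshoots E₁ because of the excited-state term, and it decays toward the plateau with the rate E₂ − E₁, exactly as described above.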
Since the propagator becomes noisier at large time, it is necessary to reduce the
contribution of excited states as much as possible. A first possibility is to choose an
interpolating field with a large overlap with the desired state; this can be achieved by
using smearing techniques (Section 2.6). In the next section, I introduce the Generalized
Eigenvalue Problem: using many interpolating operators with the same quantum numbers,
we will see how the contribution of excited states can be removed in an efficient
and systematic way. It will also be particularly useful to extract information about
excited states in Chapters 4 and 6.
        2.4.3 Evaluation on the lattice
On the lattice, the correlation function is estimated via formula (2.6), and I will
now explain in detail the procedure in the case of a two-point correlation function.
These correlators will be used in Chapters 3, 4 and 6. The correlation function we are interested in
is

C(t) = \langle O_\Gamma(t)\, O^\dagger_{\Gamma'}(0) \rangle ,   (2.24)

where O_\Gamma and O_{\Gamma'} are two interpolating operators at vanishing momentum:
O_\Gamma(t) = \sum_{\vec{x}} \bar{\psi}_2(x, t)\, \Gamma\, \psi_1(x, t) , \qquad O_{\Gamma'}(t) = \sum_{\vec{x}} \bar{\psi}_2(x, t)\, \Gamma'\, \psi_1(x, t) .   (2.25)
The correlation function is depicted in Figure 2.1.

Figure 2.1 – Two-point correlation function, with sink (x, Γ) and source (y, Γ').

Then, the fermionic expectation value
is written in terms of propagators by performing the Wick contractions as explained in
Section 2.1. The formula (2.6) gives

C(t) = \frac{1}{N_c} \sum_{i=1}^{N_c} \langle O_\Gamma(t)\, O^\dagger_{\Gamma'}(0) \rangle_F
     = \frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x}, \vec{y}} \langle \bar{\psi}_2(x, t)\, \Gamma\, \psi_1(x, t) \cdot \bar{\psi}_1(y, 0)\, \Gamma'\, \psi_2(y, 0) \rangle_F
     = -\frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x}, \vec{y}} \mathrm{Tr}\left[ G_2(y, 0; x, t)\, \Gamma\, G_1(x, t; y, 0)\, \Gamma' \right] ,
where we sum over lattice gauge configurations and take the trace over spinor and color
indices. So, for each gauge configuration, we need to compute the quark propagators
G_1 and G_2 and then evaluate the trace by performing the correct contractions. The
correlation function is finally obtained by averaging over all gauge configurations. In
this work, I will always use two degenerate dynamical quarks, therefore the propagators
G_1 and G_2 are numerically the same (but formally they are different; in particular,
contractions between ψ_1 and ψ_2 must not be considered, since only flavor non-singlet
interpolating operators are used). Usually, we can also use γ₅-hermiticity to express the
forward Dirac propagator G(x; y) in terms of the backward Dirac propagator G(y; x),
namely G(x; y) = \gamma_5\, G(y; x)^\dagger\, \gamma_5 (the Hermitian conjugation refers to spinor space only).
In the case of the above two-point correlation function, we obtain

C(t) = -\frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x}, \vec{y}} \mathrm{Tr}\left[ G(y, 0; x, t)\, \Gamma\, \gamma_5\, G(y, 0; x, t)^\dagger\, \gamma_5\, \Gamma' \right] ,
        and only one inversion is needed.
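The saving can be checked numerically on a single toy spinor-space propagator; γ₅ is taken in the Dirac basis and Γ = Γ' = γ₅ (pseudoscalar channel), with color indices omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
g5 = np.diag([1.0, 1.0, -1.0, -1.0])    # gamma_5 in the Dirac basis
Gamma = Gammap = g5                      # pseudoscalar channel: Gamma = Gamma' = gamma_5

# toy spinor-space propagator for one (x, y) pair, color indices omitted
G_src = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))   # G(y,0; x,t): one inversion
G_bwd = g5 @ G_src.conj().T @ g5        # G(x,t; y,0) via gamma_5-hermiticity

c_two_inv = -np.trace(G_src @ Gamma @ G_bwd @ Gammap)   # as if both orientations were computed
c_one_inv = -np.trace(G_src @ Gamma @ g5 @ G_src.conj().T @ g5 @ Gammap)  # only G_src needed
print(np.isclose(c_two_inv, c_one_inv))
```

Both expressions agree, which is precisely why only one inversion per source is needed in practice.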
2.5 The Generalized Eigenvalue Problem
Using just one interpolating field, the extraction of ground-state information is often not
very precise, and the signal gets even worse for the first excited state. Therefore, more
sophisticated methods are needed. The idea is to use different interpolating operators,
with different overlaps with the excited states, and to combine them into an improved
operator with the largest possible overlap with the ground state. This can be done systematically
by solving a Generalized Eigenvalue Problem.
We consider several operators O_i with the same quantum numbers; the correlation matrix is then

C_{ij}(t) = \langle O_i(t)\, O_j^\dagger(0) \rangle = \sum_{n=1}^{\infty} Z_{ni} Z_{nj}^{*}\, e^{-E_n t} , \qquad i, j = 1, \cdots, N ,

where Z_{ni} = \frac{1}{\sqrt{2E_n}} \langle 0 | \hat{O}_i | B_n \rangle corresponds to the strength of the overlap between the
interpolating field O_i and the n-th excited state. The Generalized Eigenvalue Problem [42]
consists in solving the matrix equation

C(t)\, v_n(t, t_0) = \lambda_n(t, t_0)\, C(t_0)\, v_n(t, t_0) ,   (2.26)

where v_n(t, t_0) and λ_n(t, t_0) are respectively the generalized eigenvectors and eigenvalues.
In the following, we assume that t_0 > t/2; this condition is necessary to ensure a small
contribution of the excited states [42]. From the eigenvalues, we can extract the different
energy levels by considering the following estimator:
E^{\mathrm{eff}}_n(t, t_0) = -\partial_t \log \lambda_n(t, t_0) = \frac{1}{a} \log \frac{\lambda_n(t, t_0)}{\lambda_n(t + a, t_0)} = E_n + O\!\left( e^{-\Delta E_{N+1,n}\, t} \right) ,   (2.27)
where E_n is the exact energy of the n-th state and \Delta E_{N+1,n} = E_{N+1} - E_n is the energy
difference between the n-th and (N+1)-th states. This formula has to be compared with
the case where only one interpolating field is used, where the suppression factor is
only O(exp(−(E_2 − E_1)t)). It is then advantageous to have a large basis of interpolating
fields. However, the GEVP tends to be unstable when large bases are used, especially if
the interpolating fields are not sufficiently different. In practice, in this work, the choice
N = 3 − 5 seems optimal.
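A minimal GEVP sketch with N = 2 operators and two states (the overlap matrix Z and the energies are made-up numbers); with as many operators as states, the generalized eigenvalues are exactly e^{−E_n(t−t₀)}:

```python
import numpy as np

E = np.array([0.5, 0.9])          # made-up energy levels
Z = np.array([[1.0, 0.6],         # made-up overlaps Z[n, i] of operator i with state n
              [0.4, 1.0]])

def C(t):
    """Correlation matrix C_ij(t) = sum_n Z_ni Z_nj e^{-E_n t}."""
    return (Z.T * np.exp(-E * t)) @ Z

t0, t = 3, 6
# generalized eigenvalues of C(t) v = lambda C(t0) v
lam = np.sort(np.linalg.eigvals(np.linalg.solve(C(t0), C(t))).real)[::-1]
E_eff = -np.log(lam) / (t - t0)   # estimator in the spirit of eq. (2.27)
print(E_eff)                      # recovers (0.5, 0.9) exactly in this 2x2 toy case
```

With more states than operators, the recovered energies would instead carry the O(e^{−ΔE_{N+1,n}t}) contamination of eq. (2.27).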
From the eigenvectors, we can also build improved interpolating operators having
an optimized overlap with the desired states, reducing the contamination from higher
excited states. First, we define

\hat{Q}^{\mathrm{eff}}_n(t, t_0) = R_n(t, t_0)\, \big( \hat{O}, v_n(t, t_0) \big) ,   (2.28)
where R_n is a normalization coefficient given by

R_n(t, t_0) = \big( v_n(t, t_0),\, C(t)\, v_n(t, t_0) \big)^{-1/2} \left( \frac{\lambda_n(t_0 + a, t_0)}{\lambda_n(t_0 + 2a, t_0)} \right)^{t/(2a)} ,   (2.29)

and where (a, b) = \sum_i a_i^{*} b_i is the inner product over eigenvector indices. Then, this operator
can be used as an effective creation operator, namely we have

e^{-Ht}\, \hat{Q}^{\mathrm{eff}}_n(t, t_0)^\dagger\, |0\rangle = |n\rangle + O\!\left( e^{-\Delta E_{N+1,n}\, t_0} \right) \quad \text{at fixed } t - t_0 .   (2.30)
        Again, the magnitude of the contamination from higher excited states is small and
        decreases when increasing the value of t0.
We can now apply these results to a matrix element of the form M_n = \langle 0 | \hat{P} | n \rangle to obtain:

M^{\mathrm{eff}}_n = \langle 0 | \hat{P}\, e^{-Ht}\, \hat{Q}^{\mathrm{eff}}_n(t, t_0)^\dagger | 0 \rangle = \langle P(t)\, \big( Q^{\mathrm{eff}}_n(t, t_0) \big)^\dagger \rangle = M_n + O\!\left( e^{-\Delta E_{N+1,n}\, t_0} \right) .   (2.31)
Using eq. (2.28), we can express this estimator in terms of eigenvalues and eigenvectors:

M^{\mathrm{eff}}_n(t, t_0) = R_n(t, t_0)\, \big( \widetilde{C}(t),\, v_n(t, t_0) \big) ,   (2.32)

where \widetilde{C}_i(t) = \langle P(t)\, O_i^\dagger(0) \rangle.
        2.6 Smearing
Another technique used to improve the quality of the signal is called smearing. It is
a transformation where each gauge link variable U_μ(x) is replaced by an average of the
gauge link variables along certain paths connecting the endpoints of the original link.
In particular, it reduces the short-distance fluctuations of the quantum field without
affecting its IR structure: indeed, the smearing transformation amounts to adding
irrelevant operators, whose contributions vanish in the continuum limit. It is extremely
useful to reduce the gauge noise of observables, and many different algorithms exist. In
this work, we will use two of them: the APE and the HYP smearings.
Smearing can also be applied to the fermionic field to increase the overlap of an
interpolating operator with the ground state. In particular, in this work, the different
operators used in the Generalized Eigenvalue Problem basis will usually correspond to
different levels of Gaussian smearing applied to some local operator.
        2.6.1 APE smearing
The APE smearing was introduced by the APE Collaboration [43]. The idea is to
replace each link variable U_μ(x) by a weighted average of this link and the surrounding
staples:

\widetilde{U}_\mu(x) = (1 - \alpha)\, U_\mu(x) + \frac{\alpha}{6} \sum_{\nu \neq \mu} C_{\mu\nu}(x) ,   (2.33)
Figure 2.2 – Illustration of the four staples in a hyperplane containing the original link
U_μ(x) (from x to x + a\hat{μ}). The last two staples lie outside this hyperplane.
where the staples C_{μν}(x) correspond to the six shortest paths starting from the point
x and ending at the point x + a\hat{μ} (see Figure 2.2). The transformation (2.33) does not
belong to SU(3), and the new link variable has to be projected back to SU(3):

U^{\mathrm{APE}}_\mu(x) = \mathrm{Proj}_{SU(3)}\, \widetilde{U}_\mu(x) .   (2.34)
        Finally, this smearing procedure can be iterated several times.
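As a toy illustration of eqs. (2.33)–(2.34), one can use U(1) links on a 2-d periodic lattice, where the projection back to the group reduces to a phase normalization (this is an analogue of the SU(3) algorithm, not the algorithm itself, and all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
L, alpha = 8, 0.5
# U(1) toy: links are phases e^{i theta}; "ProjSU(3)" becomes z -> z/|z|
U = np.exp(1j * 0.3 * rng.normal(size=(2, L, L)))   # U[mu] on an L x L periodic lattice

def staple_sum(U, mu):
    """Sum of the two staples connecting x and x + mu-hat on a 2-d lattice."""
    nu = 1 - mu
    up = U[nu] * np.roll(U[mu], -1, axis=nu) * np.conj(np.roll(U[nu], -1, axis=mu))
    dn_nu = np.roll(U[nu], 1, axis=nu)                 # U_nu(x - nu-hat)
    dn = np.conj(dn_nu) * np.roll(U[mu], 1, axis=nu) * np.roll(dn_nu, -1, axis=mu)
    return up + dn

def ape_smear(U, alpha):
    V = np.array([(1 - alpha) * U[mu] + (alpha / 2) * staple_sum(U, mu) for mu in (0, 1)])
    return V / np.abs(V)                               # project back onto the group

def mean_plaquette(U):
    P = U[0] * np.roll(U[1], -1, axis=0) * np.conj(np.roll(U[0], -1, axis=1)) * np.conj(U[1])
    return P.real.mean()

p0 = mean_plaquette(U)
p1 = mean_plaquette(ape_smear(U, alpha))
print(p0, p1)    # smearing smooths the field: the plaquette moves toward 1
```

In two dimensions there are only two staples per link, hence the α/2 weight in place of the α/6 of eq. (2.33).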
        2.6.2 HYP smearing
        The HYP smearing (hypercubic smearing) [44] can be seen as a generalization of the
        APE smearing where fat links are now constructed from links which lie in hypercubes
        containing the original link. The smoothing procedure is done in three steps with
        coefficients (α1, α2, α3). In this work, it will be applied to the time-links of heavy-light
correlation functions; in this case, one has

U^{\mathrm{HYP}}_0(x) = \mathrm{Proj}_{SU(3)} \left[ (1 - \alpha_1)\, U_0(x) + \frac{\alpha_1}{6} \sum_{\pm i \neq 0} \widetilde{V}_{i;0}(x)\, \widetilde{V}_{0;i}(x + \hat{i})\, \widetilde{V}^\dagger_{i;0}(x + \hat{0}) \right] ,

where the decorated links \widetilde{V}_{\mu;\nu}(x) are defined by

\widetilde{V}_{\mu;\nu}(x) = \mathrm{Proj}_{SU(3)} \left[ (1 - \alpha_2)\, U_\mu(x) + \frac{\alpha_2}{4} \sum_{\pm\rho \neq \nu,\mu} \bar{V}_{\rho;\nu,\mu}(x)\, \bar{V}_{\mu;\rho,\nu}(x + \hat{\rho})\, \bar{V}^\dagger_{\rho;\nu,\mu}(x + \hat{\mu}) \right] ,

and finally the decorated links \bar{V}_{\mu;\nu,\rho}(x) are defined by

\bar{V}_{\mu;\nu,\rho}(x) = \mathrm{Proj}_{SU(3)} \left[ (1 - \alpha_3)\, U_\mu(x) + \frac{\alpha_3}{2} \sum_{\pm\eta \neq \rho,\nu,\mu} U_\eta(x)\, U_\mu(x + \hat{\eta})\, U^\dagger_\eta(x + \hat{\mu}) \right] .
        The optimal choice obtained in ref. [44] corresponds to the HYP1 action and is given by
        ~αHYP1 = (0.75, 0.6, 0.3). Another choice proposed in ref. [45] after minimizing the noise
        to signal ratio is called HYP2 and is given by ~αHYP2 = (1.0, 1.0, 0.5).
        2.6.3 Gaussian smearing
While APE and HYP smearings are applied to the gauge field and used to reduce
the noise coming from short-distance fluctuations, the Gaussian smearing [46] is applied
to the fermionic field and is defined by

\psi^{(k)}(x) = (1 + \kappa_G \Delta)^{n_k}\, \psi(x) ,   (2.35)

where Δ is the 3-d Laplace operator defined in Appendix A, n_k is the number of steps,
and κ_G is the coupling strength of the nearest neighbors in the spatial directions. Gaussian
smearing is often combined with gauge link smearing, where the Laplace operator is itself
constructed from fat links. Intuitively, starting from a local source, the transformation
(2.35) leads to a non-local source with a Gaussian profile; the radius of the source,
r_k = 2a\sqrt{\kappa_G n_k}, increases with the number of iterations. Since mesons are extended
objects, the smeared interpolating field \psi^{(k)} is expected to have a better overlap with
        the ground state level as depicted in Figure 2.3.
Figure 2.3 – Effective mass m_eff(t) = log(C(t)/C(t + a)) using heavy-light two-point
correlation functions for the B meson, computed with different levels of smearing. Here
κ_G = 0.1 and n_k = (33, 133, 338).
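The growth of the source radius with the number of smearing iterations can be sketched with the free 1-d lattice Laplacian (an illustrative analogue of the 3-d operator of eq. (2.35); all parameters are illustrative):

```python
import numpy as np

L, kappa = 64, 0.1
psi0 = np.zeros(L)
psi0[L // 2] = 1.0          # local (point) source on a 1-d periodic lattice

def smear(psi, n):
    """Apply (1 + kappa * Delta)^n with the free 1-d lattice Laplacian (a = 1)."""
    for _ in range(n):
        lap = np.roll(psi, 1) + np.roll(psi, -1) - 2 * psi
        psi = psi + kappa * lap
    return psi

def radius(psi):
    x = np.arange(L) - L // 2
    w = psi / psi.sum()
    return np.sqrt((w * x**2).sum())   # rms radius of the smeared source

r = [radius(smear(psi0, n)) for n in (10, 50, 200)]
print(r)    # the radius grows like sqrt(n_k), as in r_k = 2a sqrt(kappa_G n_k)
```

Each iteration acts like one step of a heat-kernel diffusion, so the point source spreads into an approximately Gaussian profile whose width grows like the square root of the number of steps.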
        2.7 Error estimation
In a Monte Carlo simulation, the Markov chain has a finite size (typically of the
order of 10^4), and the same configurations are used to compute different observables,
which are therefore correlated. Moreover, since the Markov process generates each new
gauge configuration from the previous one, it also introduces autocorrelation. We would
like to estimate the statistical error associated to an observable computed on the lattice
(using eq. (2.6)), taking into account all correlations. I will briefly discuss two techniques
used in this work. The first one is the Jackknife method, based on re-sampling.
The second is the Gamma method [47], where one tries to estimate the full
autocorrelation matrix. Systematic errors are not considered here and will be the subject
of the next section.
In lattice QCD, the primary observables are usually correlation functions. We label
a set of P primary observables (with N measurements for each) by:

\left\{ \alpha^n_p \mid p = 1, \cdots, P\, ;\ n = 1, \cdots, N \right\} .   (2.36)
        2.7.1 The Jackknife Procedure
The Jackknife procedure was originally introduced by Quenouille for bias reduction.
Later, Tukey noticed that the same technique is also useful to estimate the variance.
It has the advantage of being easy to implement and very fast. For a review,
see [48].
Mean value estimate
The mean value \hat{\alpha} of a primary observable is estimated by the following unbiased estimator:

\bar{\alpha}_p = \frac{1}{N} \sum_{i=1}^{N} \alpha^i_p .   (2.37)

Then, for each secondary observable f, a function of the primary observables α_p, an
estimator of the true mean \hat{f} = f(\hat{\alpha}) is given by

\bar{f} = f(\bar{\alpha}_p) .   (2.38)
This estimator generally has a bias of order 1/N, which can be corrected by the
Jackknife procedure (formula (2.42)). However, since the statistical errors in the Monte
Carlo simulation are of order 1/\sqrt{N}, this bias can usually be safely neglected.
To estimate the variance, one would naively use the following formula:

\sigma^2(f) = \frac{1}{N(N-1)} \sum_{i=1}^{N} \left( f(\alpha^i_p) - \bar{f} \right)^2 ,   (2.39)
but f(\alpha^i_p) is generally a broad distribution, \langle f(\alpha^i_p) \rangle \neq \hat{f}, and the previous formula
fails. Moreover, it does not take autocorrelations into account. The blocking procedure
described in the next section addresses the second issue, and the Jackknife resampling
method proposes a solution to the first one.
Blocking
We divide our N measurements into N_B blocks of B consecutive measurements
(N = N_B \times B). The block average \beta^b_p of the primary observable p is then

\beta^b_p = \frac{1}{B} \sum_{i=1}^{B} \alpha^{i + (b-1)B}_p , \qquad b = 1, \cdots, N_B .   (2.40)
If the block size is chosen to be larger than the autocorrelation time (B \gg \tau), the
block variables can be considered as new, independent variables characterized by their
mean \beta^b_p and their variance. But, obviously, the mean and the variance are invariant
under such a blocking transformation. Therefore, the statistical error on the primary
observables α_p could be estimated via the naive estimator (2.39) using the block variables
\beta^b_p. The problem appears when non-linear functions of the primary observables are
considered, since \langle f(\beta^b_p) \rangle \neq \hat{f}. In this case, the Jackknife procedure can be used.
Jackknife samples
The Jackknife samples (bins) are defined by

J^b_p = \frac{1}{N - B} \left( \sum_{i=1}^{N} \alpha^i_p - \sum_{i=1}^{B} \alpha^{i + (b-1)B}_p \right) = \frac{1}{N - B} \left( N \bar{\alpha}_p - B \beta^b_p \right) ,   (2.41)
and correspond to the full sample with block b deleted. Consequently, each Jackknife
bin contains most of the information (especially when B = 1, the delete-one Jackknife),
and the bins are clearly not independent.
From the Jackknife samples, the bias of order 1/N in (2.38) can be corrected by
considering

\bar{f}_J = \bar{f} - (N_B - 1) \left( \bar{f}' - \bar{f} \right) , \qquad \bar{f}' = \frac{1}{N_B} \sum_{b=1}^{N_B} f(J^b_p) .   (2.42)
Error estimate
Finally, an unbiased estimator of the variance for a secondary variable is given by
the Jackknife variance (see ref. [49] for a proof),

\sigma^2_J(f) = \frac{N_B - 1}{N_B} \sum_{b=1}^{N_B} \left( f(J^b_p) - \bar{f}' \right)^2 ,   (2.43)

where the pre-factor \frac{N_B - 1}{N_B} corrects for the fact that our variables are not independent
but correspond to a resampling of the original ones. In eq. (2.43), the mean estimate \bar{f} could
also be used instead of \bar{f}'. In practice, to check the reliability of the result, we verify
that it does not depend on the block size B, which should be chosen larger than
the autocorrelation time. Finally, using the Jackknife procedure to propagate errors
has the advantage of taking cross-correlations into account automatically, contrary to the
standard propagation of errors, where they must be added explicitly.
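A compact sketch of the blocking + Jackknife procedure (eqs. (2.40)–(2.43)) for uncorrelated toy data and the non-linear secondary observable f(ᾱ) = ᾱ²; the data and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 1000, 10
NB = N // B
alpha = rng.normal(loc=2.0, scale=0.5, size=N)   # toy uncorrelated primary measurements

f = lambda a: a**2            # non-linear secondary observable
abar = alpha.mean()

beta = alpha.reshape(NB, B).mean(axis=1)          # block averages, eq. (2.40)
J = (N * abar - B * beta) / (N - B)               # jackknife samples, eq. (2.41)

fJ = f(J)
f_prime = fJ.mean()
f_corr = f(abar) - (NB - 1) * (f_prime - f(abar))        # bias correction, eq. (2.42)
sigma2_J = (NB - 1) / NB * ((fJ - f_prime)**2).sum()     # jackknife variance, eq. (2.43)
print(f_corr, np.sqrt(sigma2_J))   # ~4 = (mean)^2, with its statistical error
```

With correlated data, one would increase B until the estimated error stabilizes, as described above.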
        2.7.2 The Gamma method
The Γ-method is described in detail in ref. [47]; I just recall the main formulae here.
The central point is the estimation of the full autocorrelation matrix

\Gamma_{nm}(t) = \frac{1}{N - t} \sum_{i=1}^{N - t} \left( \alpha^i_n - \bar{\alpha}_n \right) \left( \alpha^{i+t}_m - \bar{\alpha}_m \right) ,   (2.44)
        for times t N, in terms of the primary observables αn. To estimate the error associated
        to a secondary observable f, which depends on the primary observables αn, we first
        evaluate the projected autocorrelation function defined by
        \Gamma_f(t) = \sum_{n,m} f_n f_m \, \Gamma_{nm}(t) , \qquad f_n = \frac{\partial f}{\partial \alpha_n}(\bar{\alpha}_n) , \quad (2.45)
        where f_n is the partial derivative of f with respect to \alpha_n, evaluated at the central
        value \bar{\alpha}_n. In practice, the derivatives are computed numerically. In particular, \Gamma_f(0)
        corresponds to the variance of f when the autocorrelation is neglected.
        44 CHAPTER 2. Computation of observables in lattice QCD
        Finally, we can define
        the integrated autocorrelation time by
        \tau_{\mathrm{int},f}(W) = \frac{1}{2} + \sum_{t=1}^{W} \rho_f(t) , \qquad \rho_f(t) = \frac{\Gamma_f(t)}{\Gamma_f(0)} , \quad (2.46)
        where W is a cutoff (summation window) needed due to the finite size of the Markov
        chain. Furthermore, since the noise of the autocorrelation function is roughly constant
        in time, the signal is dominated by noise at large time. The statistical error of the
        observable f from N measurements is finally given by
        \sigma_{\Gamma,f}^2 = \frac{\Gamma_f(0)}{N} \times 2\, \tau_{\mathrm{int},f}(W) . \quad (2.47)
        In the case where autocorrelation is absent, we have τint,f = 1/2 and one recovers the
        expected estimator for the variance. The value of the cutoff W should be large enough
        so that the neglected tail in eq. (2.46) is indeed small, but not so large that the sum
        is dominated by noisy terms. In ref. [47], the author proposed an automatic procedure
        for choosing the window W, and a typical example is given in Figure 2.4. However,
        neglecting the tail of the autocorrelation function leads to an underestimation of τint
        and, therefore, of the statistical error.
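The formulae (2.44)-(2.47) can be sketched in Python for a single primary observable; the AR(1) test series and the fixed window are my own illustrative choices, not the automatic windowing procedure of ref. [47]:

```python
import numpy as np

def gamma_method(alpha, w):
    """Gamma-method error for a single primary observable with a fixed
    summation window w, following eqs. (2.44)-(2.47)."""
    n = len(alpha)
    abar = alpha.mean()
    d = alpha - abar
    # autocorrelation function Gamma(t), eq. (2.44) (one observable only)
    gamma = np.array([np.dot(d[:n - t], d[t:]) / (n - t) for t in range(w + 1)])
    rho = gamma / gamma[0]                        # normalized, eq. (2.46)
    tau_int = 0.5 + rho[1:].sum()                 # integrated autocorrelation time
    err = np.sqrt(gamma[0] / n * 2.0 * tau_int)   # statistical error, eq. (2.47)
    return abar, err, tau_int

# autocorrelated test series: an AR(1) process, for which the exact
# integrated autocorrelation time is 1/2 + phi/(1 - phi) = 9.5 for phi = 0.9
rng = np.random.default_rng(1)
phi, x = 0.9, np.empty(200000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = phi * x[i - 1] + rng.normal()
mean, err, tau = gamma_method(x, w=100)
```

With W = 100 the truncated sum already saturates the exact value for this toy chain; shrinking W below the autocorrelation time would underestimate both τ_int and the error, as discussed above.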
        Figure 2.4 – Typical example of the determination of the window W (plot of \rho_F(t) versus t).
        Therefore, an improved estimator for \tau_{\mathrm{int},f} was proposed in ref. [50] which takes into
        account the tail of the autocorrelation function. This slowly decaying tail is a manifestation
        of critical slowing down, due to the presence of slow modes in the Monte Carlo transition
        matrix; the associated characteristic time, \tau_{\exp}, depends on the algorithm. Each observable
        couples differently to these slow modes and, when this coupling is small, the tail of the
        autocorrelation function is difficult to estimate. In the aforementioned reference, the author
        gives an upper bound, \tau_{\exp}\, \rho_f(W), for the part neglected in eq. (2.46), which
        then can be used to obtain a more conservative estimate of the error. Since the topological charge is particularly sensitive to the slow modes, it is one of the most popular
        quantities used to estimate τexp.
        Once τexp is approximately known, the idea is to choose a second window Wu, where
        the signal differs significantly from zero, and to estimate the remaining part in eq. (2.46)
        by \rho_f(t) \approx \rho_f(W_u)\, e^{-(t - W_u)/\tau_{\exp}} for t > W_u. Then, one obtains
        \tau_{\mathrm{int},f}^{(2)}(W_u) = \tau_{\mathrm{int},f}(W_u) + \tau_{\exp}\, \rho_f(W_u) , \quad (2.48)
        where the first part is computed explicitly in the region where it is rather well determined
        by using eq. (2.46) and the second part is an estimation of the contribution of the tail.
        The statistical error is now given by
        \sigma_{\Gamma,f}^2 = \frac{\Gamma_f(0)}{N} \times 2\, \tau_{\mathrm{int},f}^{(2)}(W_u) . \quad (2.49)
        An illustration of the window procedure is given in Figure 2.5.
        Figure 2.5 – Improved estimator for the integrated autocorrelation time (plot of \rho_F(t) versus t, with the second window W_u).
        2.8 Setting the scale and the continuum limit
        In the first chapter, the action was formulated in terms of dimensionless quantities parametrized by the bare coupling constant g0 and the bare quark masses mi (or,
        equivalently, by β and the hopping parameters κi). In the case of Nf = 2 simulations,
        where only two degenerate dynamical quarks are considered, we are left with two free
        parameters (β, κ). The first one sets the global scale of the simulation and the second
        one is used to tune the quark mass.
        Setting the scale
        Any observable is obtained in lattice units and, to compare the result with experiment, it is convenient to convert it into physical units. This step, called setting the scale,
        consists in computing the lattice spacing in physical units by requiring one observable,
        computed on the lattice, to match its physical value. Setting the scale and adjusting
        the quark masses is a coupled problem. Therefore, to set the scale one usually chooses
        a physical observable A which depends weakly on the quark masses so that the two
        steps can be considered as independent. The scale is then obtained by imposing the
        condition^1
        a[\mathrm{MeV}^{-1}] = \frac{(aA)_{\mathrm{lat}}}{A_{\exp}[\mathrm{MeV}]} ,
        where (aA)lat is the value of the observable computed on the lattice and Aexp is its
        physical value in MeV. Typical observables are the omega baryon mass [51], or the pion
        and kaon decay constants f_\pi, f_K [52]. The observable should be chosen with care: besides
        the fact that it should not depend too much on the quark masses, it should also be easily
        1. The conversion factor between fm and MeV is 1 fm^{-1} = 197.327 MeV.
        computed on the lattice with a small statistical error to allow for a precise estimation.
        The systematic errors should also be well under control: in particular, the mass of the
        ρ meson is not an optimal choice since it corresponds to a resonance. Finally, the error
        on the scale will affect all quantities expressed in physical units but also the continuum
        and chiral extrapolations (see Section 2.9).
        The quark masses are determined in a second step. In this work, up and down
        quarks are assumed to be degenerate and their mass can be set by computing just one
        observable, like the pion mass. First, the pion mass is computed in lattice units (amπ)lat,
        then the result is converted in physical units using the previous estimation of the lattice
        spacing:
        m_\pi[\mathrm{MeV}] = \frac{(a m_\pi)_{\mathrm{lat}}}{a[\mathrm{MeV}^{-1}]} .
        There is an ambiguity in setting the scale at finite lattice spacing due to discretization
        errors, but this ambiguity should vanish in the continuum limit and does not affect the
        results extrapolated to a → 0. Nevertheless, since we work with Nf = 2 dynamical
        quarks, an ambiguity arises from the choice of observables used to match the theory
        with experiment.
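The two matching steps amount to simple arithmetic; here is a sketch with invented lattice numbers (only the Ω baryon mass and the fm-to-MeV conversion factor are physical values):

```python
# Conversion factor between fm and MeV: 1 fm^-1 = 197.327 MeV
HBARC = 197.327

# Step 1, setting the scale: match a lattice observable to its physical
# value. Here the Omega baryon mass, with an invented lattice number.
am_omega_lat = 0.52          # hypothetical (a m_Omega) measured on the lattice
m_omega_exp = 1672.45        # physical Omega mass in MeV

a_inv_MeV = m_omega_exp / am_omega_lat   # a^-1 in MeV
a_fm = HBARC / a_inv_MeV                 # lattice spacing in fm

# Step 2, tuning the quark mass: convert a measured pion mass to MeV
am_pi_lat = 0.085            # hypothetical (a m_pi)
m_pi_MeV = am_pi_lat * a_inv_MeV
```

With these toy numbers the lattice spacing comes out near 0.06 fm and the pion mass near 270 MeV, i.e. in the range quoted in Section 2.9.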
        The continuum limit
        Lattice QCD offers a natural regularization of the theory both in the infrared (IR)
        and in the ultraviolet (UV) regimes (via the spatial extent L of the lattice and the lattice spacing a, respectively).
        of the lattice). To compare the results with experiment, we would like to remove both
        cut-offs. Neglecting volume effects, this is performed by taking the limit a → 0 at fixed
        physical volume (corresponding to larger and larger lattice resolutions L/a).
        2.9 Discussion of systematic errors
        A typical lattice simulation is performed in a physical volume of a few fermi (L ∼
        3 fm) and at lattice spacings of order a ∼ 0.06 fm, corresponding to lattice resolutions
        L/a ∼ 50. In this work, we also work at unphysical quark masses, where the pion
        mass lies in the range [190, 450] MeV. Therefore, many systematic errors have to be
        considered.
        Discretization effects
        Due to the finite lattice spacing a, one expects discretization errors linear in the
        lattice spacing. However, improved actions and operators can be used to cancel O(a)
        artifacts. In the case of Wilson fermions, this is done by adding the Clover term (1.30)
        in the action and higher-dimensional counterterms to the currents of interest. The
        theory is then called O(a)-improved and the first corrections for on-shell quantities
        are quadratic in the lattice spacing. To evaluate discretization errors, we can perform
        several simulations, at different values of the lattice spacing a, and then extrapolate to
        the continuum limit. To keep the physical volume V constant, the lattice resolution
        L/a has to be increased and the numerical cost of the simulations grows. Therefore,
        O(a)-improvement can help to reduce the range over which the lattice spacing should
        vary.
        Volume effects
        This source of systematic errors is due to the finite size of the lattice: because of periodic boundary conditions, virtual pions can travel around the lattice. The associated
        corrections, of order O(e^{-m_\pi L}), were computed in ref. [53] and decrease exponentially with the
        volume. The CLS ensembles used in this work fulfill the criterion L m_\pi > 4 and volume
        effects are expected to be very small. Therefore, we will not perform any infinite volume
        extrapolation.
        Dynamical quarks
        Evaluating the quark propagator on the lattice becomes more and more difficult
        as the pion mass gets closer to its physical value. Therefore, many lattice simulations
        are performed at non-physical quark masses. To estimate the associated systematic
        error, different simulations at several quark masses are performed and the results are
        extrapolated to the chiral limit using fit formulae inspired by chiral perturbation
        theory [54, 55]. A second source of systematic errors comes from the fact that only two
        dynamical quarks are used in the simulations (quark loops with c, s, b and t quarks are
        neglected) and the associated error is more difficult to estimate.

      In the Monte Carlo algorithm, we would like to perform global transformations of
      the gauge field. First, because local transformations would also require the computation
      of the full determinant (which is a non-local quantity) and secondly to reduce autocorrelations. However, after a global transformation, the corresponding value of the action
      can change a lot, so the new gauge configuration is unlikely to contribute significantly
      to the path integral. Therefore, very small steps would be required in the update algorithm, leading to high autocorrelations and a large number of updates. The HMC algorithm [35],
      presented in this section, solves these problems. It allows for global transformations
      while maintaining a good efficiency.
      As explained in the previous section, the update algorithm used to generate the Markov
      chain is defined by its transition probability P. This probability should satisfy the
      ergodicity and the detailed balance conditions. In our case, the probability is written
      P = PEPA where:
      — PE is the probability to generate {U}n+1 from {U}n during the update process.
      It will depend on the details of the algorithm.
      — PA is the acceptance probability to decide whether or not the new gauge configuration is kept. It is chosen such that the detailed balance property is satisfied.
      Now, the idea is to interpret the action (2.8) as a potential associated with a fictitious
      Hamiltonian, and to add a new set of momentum fields Π which play the role of conjugate
      variables associated to the gauge field Uµ. Gauge links Uµ(x) are SU(3) group elements,
      so we have one su(3) Lie algebra element Πµ(x) per site x and per direction µ. Since
      the action S = SG + SF does not depend on the momenta, they can be factorized out
      and do not change the physical results:
      \langle O \rangle = \frac{1}{\tilde{Z}} \int D[\Pi_\mu] D[U_\mu] D[\phi] D[\phi^*] \, \langle O[U_\mu] \rangle_F \, e^{-\left( S_G + S_{PF} + \sum_{x \in \Lambda, \mu} \frac{1}{2} \Pi_\mu(x)^2 \right)}
      = \frac{1}{Z} \int D[U_\mu] D[\phi] D[\phi^*] \, \langle O[U_\mu] \rangle_F \, e^{-(S_G + S_{PF})} ,
      where \tilde{Z} is defined as in (2.1) but now with the total action including the momenta \Pi_\mu.
      Then, the total action, including the pseudofermion and momenta fields is:
      S_{\mathrm{HMC}} = \frac{1}{2} \Pi^2 + S_{\mathrm{QCD}}(U) = \frac{1}{2} \Pi^2 + S_G(U) + \phi^\dagger \left( D^\dagger(U) D(U) \right)^{-1} \phi . \quad (2.12)
      This action describes the evolution of a classical system in a 4-dimensional space. The
      associated time is not related to the physical time but rather to the computer time which
      labels the gauge configurations. This is called Molecular Dynamics (MD). Quantum
      fluctuations of the quantum field in 4 dimensions are described by the trajectory of a
      classical system in a 5-dimensional space-time. The Hamilton equations of motion for this
      classical system are



      \dot{U} = \frac{\delta S_{\mathrm{HMC}}}{\delta \Pi} = \Pi ,
      \dot{\Pi} = -\frac{\delta S_{\mathrm{HMC}}}{\delta U} = -\frac{\delta S_G}{\delta U} - \phi^\dagger \left[ (M^\dagger M)^{-1} \frac{\delta M^\dagger}{\delta U} (M^\dagger)^{-1} + M^{-1} \frac{\delta M}{\delta U} (M^\dagger M)^{-1} \right] \phi , \quad (2.13)
      where the right hand side of the second equation is called the force term and its exact
      expression depends on the lattice action used in the simulation. The first equation is
      numerically easy to solve but the second one is much more difficult since it requires the
      evaluation of the inverse Dirac matrix. Finally, the acceptance probability PA is chosen
      to be
      P_A(\{U, \Pi\} \to \{U, \Pi\}') = \min\left( 1 \,,\; e^{-S(U', \Pi') + S(U, \Pi)} \right) , \quad (2.14)
      so that the total probability P of the Markov process is
      P(\{U\} \to \{U\}') = \int D[\Pi] D[\Pi'] \, P_M[\Pi] \cdot P_E(\{U, \Pi\} \to \{U, \Pi\}') \cdot P_A(\{U, \Pi\} \to \{U, \Pi\}') , \quad (2.15)
      where P_M \sim \exp\left( -\frac{1}{2} \sum \Pi^2 \right) is a Gaussian distribution. One can prove that this probability P satisfies the detailed balance condition if we also impose that the evolution
      equations are reversible and area preserving. In the continuum theory, this is always
      true thanks to Liouville’s theorem but not necessarily with integration algorithms where
      a discrete step size is used. A typical example of algorithm used in simulations is the
      LeapFrog algorithm.
      During the molecular dynamics, the system lies on a hypersurface of constant energy
      and explores only a subspace of the full phase space (Π, U). Nevertheless, during this
      step, the dynamics can produce gauge configurations with very different values of the
      QCD action S_QCD(U). The heat-bath step, at the beginning of each MD trajectory,
      randomly refreshes the momenta of the system and thus ensures
      ergodicity.
      An interesting property of this algorithm is that, since the Hamiltonian is a constant
      of motion during the molecular dynamics, the acceptance rate is theoretically equal to
      one. But, because of the integration errors of the Leapfrog scheme, the acceptance
      rate is not exactly one, although it remains very high (the errors are of order O(\epsilon^2)
      in the integration step \epsilon, which is usually chosen such that P_A \approx 80\%).
      Summary:
      The use of a heat-bath algorithm and of molecular dynamics is at the origin of the
      name Hybrid Monte Carlo. The algorithm can finally be summarized as follows:
      — At the beginning of each step of the MC, the momenta conjugate to the gauge
      fields are generated randomly according to a Gaussian distribution via a heat-bath
      algorithm. Pseudofermion fields are generated in two steps: first, a random field χ
      is generated according to a Gaussian distribution and secondly, the pseudofermions
      are obtained via φ = Dχ.
      — Then, the gauge fields and momenta are updated using the molecular dynamics
      evolution eq. (2.13). During this step, the pseudofermion fields are kept constant.
      — At the end, the new gauge configuration is accepted with a probability PA given by
      eq. (2.14); this step corrects for the numerical errors introduced by the Leapfrog
      algorithm. If the configuration is rejected, we restart from the previous state, which
      is included again in the Markov chain.
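The three steps can be sketched on a toy model: a minimal HMC for a single scalar variable, with the action S(q) = q²/2 standing in for S_QCD(U) (pseudofermions omitted, all parameter values illustrative):

```python
import numpy as np

def hmc_step(q, n_md, eps, rng):
    """One HMC trajectory for the toy action S(q) = q^2 / 2.

    The structure mirrors the three steps above: heat-bath momentum
    refresh, Leapfrog molecular dynamics, accept/reject."""
    p = rng.normal()                         # heat bath: refresh the momentum
    h_old = 0.5 * p * p + 0.5 * q * q
    # Leapfrog integration (reversible and area preserving); force = -dS/dq = -q
    q_new, p_new = q, p
    p_new -= 0.5 * eps * q_new
    for _ in range(n_md - 1):
        q_new += eps * p_new
        p_new -= eps * q_new
    q_new += eps * p_new
    p_new -= 0.5 * eps * q_new
    h_new = 0.5 * p_new * p_new + 0.5 * q_new * q_new
    # accept/reject, eq. (2.14): corrects the integration errors exactly
    if rng.random() < min(1.0, np.exp(h_old - h_new)):
        return q_new, True
    return q, False                          # rejected: keep the previous state

rng = np.random.default_rng(2)
q, chain, n_acc = 0.0, [], 0
for _ in range(20000):
    q, accepted = hmc_step(q, n_md=10, eps=0.1, rng=rng)
    chain.append(q)
    n_acc += accepted
chain = np.array(chain)                      # should sample exp(-q^2/2)
```

Because the Leapfrog energy violation is O(ε²), the acceptance rate in this toy run stays close to one, exactly as described above for the full theory.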
      2.3 The quark propagator
      2.3.1 Definition
      Once gauge configurations are generated, the next step is to evaluate the quark
      propagator appearing in Wick contractions in eq. (2.4). In lattice QCD, the Dirac
      operator, for a given flavour, is written Dab
      αβ(y, x) where (a, α, y) and (b, β, x) are
      respectively the color, spinor and space-time indices associated to the sink and to the
      source. The size of the matrix is then 12N × 12N where N is the total number of sites
      of the lattice. Finally, the propagator, G, is defined as the inverse of the Dirac operator:
      \sum_{y \in \Lambda} D_{\alpha\beta}^{ab}(x, y) \, G_{\beta\gamma}^{bc}(y, z) = \delta(x, z)\, \delta_{ac}\, \delta_{\alpha\gamma} , \quad (2.16)
      and depends on the lattice action used for the simulation. Since the Dirac operator
      only involves neighboring points of the lattice, the matrix is sparse and algorithms based
      on conjugate gradient methods are particularly well suited. Nevertheless, the exact
      all-to-all inversion, i.e., the solution from each source point to each sink point of the
      lattice, is impossible with present-day computational capabilities (it would require
      12N \sim 10^8 inversions for typical lattices). The problem can be simplified by considering
      the following equation (spinor and color indices are omitted for simplicity):
      D(x, y)\, \psi(y) = \delta(x) , \qquad \psi(y) = G(y, x)\, \delta(x) , \quad (2.17)
      where the solution vector, \psi(y), corresponds to the one-to-all solution for a point source
      placed at the origin \delta(x). It corresponds to one column of the full propagator matrix
      and requires 12 inversions (one per spin and color component of the source). Moreover, the backward propagator can be
      obtained from the forward propagator by using the \gamma_5-hermiticity relation G(y, x) =
      \gamma_5\, G(x, y)^\dagger\, \gamma_5 .
      A drawback of this method is that only a small part of the gauge information is used
      since we don’t exploit the full translational invariance of the propagator (the source is
      fixed). Since generating gauge configurations is extremely costly, it would be preferable
      to exploit them to reduce the gauge noise. Moreover, point-to-all propagators are not
      suited when using non-local interpolating fields.
      2.3.2 All-to-all propagators
      Solutions exist to evaluate all-to-all propagators and are based on stochastic methods
      [36]. The idea is to use, for each gauge configuration, an ensemble of Ns stochastic
      sources satisfying
      \lim_{N_s \to \infty} \frac{1}{N_s} \sum_{s=1}^{N_s} \eta_\alpha^a(x)_s \left[ \eta_\beta^b(y)_s \right]^* = \delta_{\alpha\beta}\, \delta^{ab}\, \delta_{x,y} , \quad (2.18)
      where each component is normalized to one, \left[ \eta_\alpha^a(x)_{[r]} \right]^* \eta_\alpha^a(x)_{[r]} = 1 (no summation). This
      can be implemented using random Gaussian numbers on each site of the lattice, for each
      color and spinor index. Then, the Dirac operator is inverted for each source:
      D(x, y)\, \psi(y)_s = \eta(x)_s , \qquad D_{\alpha\beta}^{ab}(x, y)\, \psi_\beta^b(y)_s = \eta_\alpha^a(x)_s ,
      where \psi_\alpha^a(x)_s is the solution vector of size 12N. An unbiased estimator of the propagator
      is then given by contracting the solution vector with the corresponding source:
      \psi_\alpha^a(x)_s = G_{\alpha\beta}^{ab}(x, y)\, \eta_\beta^b(y)_s \;\Rightarrow\; G_{\alpha\beta}^{ab}(x, y) = \frac{1}{N_s} \sum_{s=1}^{N_s} \psi_\alpha^a(x)_s \left[ \eta_\beta^b(y)_s \right]^* . \quad (2.19)
      Of course, the number of stochastic sources is always finite and, since the inversion of the
      Dirac operator is often the most demanding part of the algorithm, it can be quite limited.
      Then, the condition (2.18) is only approximately fulfilled and the quark propagator
      obtained by using eq. (2.19) can be very noisy. Indeed, it requires the cancellation of
      the U(1) noise on the whole lattice whereas the signal decreases exponentially with the
      space-time separation. Therefore, even if some terms should cancel in average, they can
      contribute significantly to the variance. An extremely useful tool to reduce the noise is
      time dilution [36].
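A toy version of the stochastic estimator (2.18)-(2.19), with a small well-conditioned random matrix standing in for the Dirac operator and Z2 noise sources (all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40                                       # toy "Dirac operator" dimension
d = 4.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))  # well conditioned
g_exact = np.linalg.inv(d)

n_src = 5000
g_est = np.zeros((n, n))
for _ in range(n_src):
    eta = rng.choice([-1.0, 1.0], size=n)    # Z2 noise: <eta eta^T> -> 1, eq. (2.18)
    psi = np.linalg.solve(d, eta)            # one "inversion" D psi = eta
    g_est += np.outer(psi, eta)              # accumulate psi eta^*, eq. (2.19)
g_est /= n_src
```

The off-diagonal elements of the estimate carry a stochastic noise that only decreases like 1/sqrt(N_s), which is the variance problem that dilution addresses in the next section.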
      2.3.3 Time dilution
      In general, dilution consists in splitting the source η into several secondary (diluted)
      sources with vanishing overlap. For example, in time dilution, a secondary source is
      defined on a single time slice and equal to zero everywhere else. The advantage is that
      the condition (2.18) is automatically fulfilled for t_x \neq t_y. Since the time dependence of
      the quark propagator is known to be large, this leads to a significant variance reduction:
      \eta(\vec{x}, t) = \sum_\tau \eta(\vec{x}, t)^{[\tau]} , \qquad \eta(\vec{x}, t)^{[\tau]} = 0 \;\text{unless}\; t = \tau .
      The Dirac operator is now inverted on each diluted source and the full propagator is
      recovered by summing over all secondary sources:
      G_{\alpha\beta}^{ab}(x, y) = \frac{1}{N_s} \sum_{s=1}^{N_s} \sum_\tau \psi_\alpha^a(x)_s^{[\tau]} \left[ \eta_\beta^b(y)_s^{[\tau]} \right]^* ,
      where, for full-time dilution, the total number of inversions is N_\tau = N_s \times T. For example, as shown in ref. [36], on a
      32^3 \times 64 lattice, the variance will be smaller when using one complete source fully
      time-diluted rather than 64 sources without dilution. Finally, dilution could also be applied
      to spinor or color indices. The limit where dilution is applied to all space-time, color and
      Dirac indices would correspond to the computation of the exact all-to-all propagator.
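The dilution identities above are easy to check explicitly; here is a small sketch on a toy (T, L) source (the sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
T, L = 8, 6                                  # toy lattice: T time slices, L spatial points
eta = rng.choice([-1.0, 1.0], size=(T, L))   # one full Z2 noise source

# split it into T diluted sources eta^[tau], nonzero only on time slice tau
diluted = np.zeros((T, T, L))
for tau in range(T):
    diluted[tau, tau, :] = eta[tau, :]

recombined = diluted.sum(axis=0)             # summing the diluted sources gives back eta

# noise matrix sum_tau eta^[tau] (eta^[tau])^dagger, reshaped to (t, x, t', x'):
# dilution makes it exactly zero between different time slices,
# independently of the number of noise sources
noise = sum(np.outer(d.ravel(), d.ravel()) for d in diluted).reshape(T, L, T, L)
cross_slice_max = max(
    np.max(np.abs(noise[t1, :, t2, :]))
    for t1 in range(T) for t2 in range(T) if t1 != t2
)
```

The cross-slice blocks vanish identically, which is the statement that condition (2.18) holds exactly for t_x ≠ t_y.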
      2.3.4 Numerical implementation
      In this work, we used the dfl_sap_gcr inverter from the DD-HMC package [37, 38].
      It is based on a conjugate gradient algorithm with Schwarz-preconditioning [39] and low
      mode deflation [40, 41] which significantly reduces the increase in computational cost as
      the quark mass is lowered.
      Krylov Subspace Iteration Methods
      The algorithm used to compute the quark propagator is based on a conjugate gradient
      algorithm. This kind of algorithm (Krylov subspace iteration methods) is well suited
      for large and sparse matrices like the Dirac operator.
      Spectral decomposition
      The low modes of the Dirac operator lead to numerical difficulties when the quark
      mass is lowered. The idea is to compute exactly the N_0 lowest modes of the operator
      and to treat them separately using the decomposition
      D^{-1}(x, y) = \sum_{i=1}^{N_0} \frac{1}{\lambda_i}\, v^{(i)}(x) \otimes v^{(i)}(y)^\dagger + \widetilde{D}^{-1}(x, y) ,
      where (v^{(i)}, \lambda_i) are respectively the eigenvectors and eigenvalues. The remaining part of
      the Dirac operator, \widetilde{D}^{-1}(x, y), is then better conditioned (since the low modes have been
      suppressed) and easier to invert numerically. The problem comes from the fact that the
      eigenvalue density increases with the volume making the exact evaluation of the low
      lying eigenvalues impossible for large lattices. However, as shown in ref. [40], only a
      small number of the low lying modes needs to be solved exactly to capture the essential
      physics, such that the method can also be used for large volumes.
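A sketch of the deflation idea on a small Hermitian toy matrix; a plain Richardson iteration stands in for the Schwarz-preconditioned solver of the DD-HMC package, and all sizes and spectra are invented:

```python
import numpy as np

def deflated_solve(a, b, n0, n_iter=200):
    """Solve a x = b for a Hermitian positive-definite matrix a by treating
    the n0 lowest modes exactly and iterating only on the deflated remainder."""
    w, v = np.linalg.eigh(a)                 # eigenvalues in ascending order
    low = v[:, :n0]                          # exactly computed low modes
    # exact low-mode contribution: sum_i v^(i) (v^(i))^dag b / lambda_i
    x_low = low @ ((low.T @ b) / w[:n0])
    # the remainder lives in the orthogonal complement, where the smallest
    # eigenvalue is lambda_{n0+1}: the effective condition number is reduced
    b_high = b - low @ (low.T @ b)
    x_high = np.zeros_like(b)
    r = b_high.copy()
    for _ in range(n_iter):
        x_high = x_high + r / w[-1]          # step set by the largest eigenvalue
        r = b_high - a @ x_high
    return x_low + x_high

# toy "Dirac operator": two very low modes, the rest well separated
rng = np.random.default_rng(5)
n = 30
q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[0.01, 0.02], rng.uniform(1.0, 2.0, n - 2)])
a = q @ np.diag(eigs) @ q.T
b = rng.standard_normal(n)
x = deflated_solve(a, b, n0=2)
```

Without deflation the same fixed-step iteration would converge at a rate set by the tiny eigenvalues 0.01 and 0.02; with them removed, a couple of hundred cheap iterations reach machine-level accuracy.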
      Standard optimizations
      Since the square of the Dirac operator only involves even or odd sites separately, one
      can use the so-called even-odd preconditioning. It significantly reduces the condition
      number of the Dirac operator and leads to an acceleration of the solver. It also reduces
      the memory space needed to store the fermionic fields.
      2.4 Correlators
      In lattice QCD simulations, we are often interested in the special case of two- or
        three-point correlation functions. In this section, I will explain in more detail how
        the two-point correlation functions can be computed, and an example of a three-point
      correlation function will be given in Chapter 4. We will see that two-point correlation
      functions are useful to extract the energy levels of mesons or some simple matrix elements
      like decay constants.
      2.4.1 Interpolating operator
        An interpolating operator, O, associated with a bound state M, is an operator with
        a non-zero overlap with the state of interest. In particular, it must carry the same
        quantum numbers (parity, spin, flavour). Then, for a scalar field, we have
        \langle 0 | O(x) | M \rangle = \sqrt{Z}\, e^{-i P \cdot x} ,
        where \sqrt{Z} = \langle 0 | \hat{O} | M \rangle is the overlap factor associated with the interpolating operator.
        Similarly, for a vector field,
        \langle 0 | O_\mu(x) | M(\epsilon) \rangle = \epsilon_\mu \sqrt{Z}\, e^{-i P \cdot x} ,
        where \epsilon_\mu is the polarization of the field. In practice, the interpolating field couples to
        every particle with the same quantum numbers and different choices are possible. They
        lead to different overlap factors Z and couple differently to the excited states.
      The simplest interpolating operator can be constructed from one of the 16 linearly
      independent combinations of gamma matrices (denoted by Γ) such that it has the correct
      quantum numbers (see Table 2.1):
        O(x) = \bar{\psi}_1(x)\, \Gamma\, \psi_2(x) , \quad (2.20)
      (x)Γψ2(x), (2.20)
      where ψ1 and ψ2 may correspond to different flavours. Generally, an interpolating
      operator for a particle with spatial momentum ~q is given by
        O_{\vec{q}}(t) = \frac{1}{V} \sum_{\vec{x}} e^{-i \vec{q} \cdot \vec{x}}\, O(\vec{x}, t) .
      In particular, to compute the mass of a meson, it is convenient to work at vanishing
        momentum, so we sum over all spatial lattice points. Finally, defining \bar{\Gamma} = \gamma_0 \Gamma^\dagger \gamma_0, we
        have
        O^\dagger(x) = \bar{\psi}_2(x)\, \bar{\Gamma}\, \psi_1(x) , \quad (2.21)
        and the meson two-point correlation function at vanishing momentum is
        C(t) = \langle O(t)\, O^\dagger(0) \rangle = \sum_{\vec{x}, \vec{y}, t_0} \langle O(\vec{x}, t_0 + t)\, O^\dagger(\vec{y}, t_0) \rangle ,
      where I have used the translational invariance.
                        J^{PC}    Γ
        Scalar          0^{++}    1
                        0^{+-}    γ_0
        Pseudoscalar    0^{-+}    γ_5, γ_0 γ_5
        Vector          1^{--}    γ_i, γ_0 γ_i
        Axial           1^{++}    γ_5 γ_i
        Tensor          1^{+-}    γ_i γ_j
        Table 2.1 – Quantum numbers associated with some local interpolating operators of the
        form O(x) = \bar{\psi}(x) \Gamma \psi(x)
      2.4.2 Asymptotic behavior
        In this section, I use the notation \hat{O} for the time-independent operator in the
        Schrödinger picture and O(t) for the time-dependent operator in the Heisenberg picture.
        Then, using the spectral decomposition
        \mathbb{1} = \sum_n \int \frac{d^3 p_n}{(2\pi)^3\, 2E_n}\, | M_n \rangle \langle M_n | ,
      the two-point correlation function becomes
        C(t) = \langle O(t)\, O^\dagger(0) \rangle = \sum_{n=1}^{\infty} \frac{1}{2E_n} \langle 0 | \hat{O} | M_n \rangle \langle M_n | \hat{O}^\dagger | 0 \rangle\, e^{-E_n t} , \quad (2.22)
        where E_n is the energy of the n-th state of the Hamiltonian and where I used the relativistic normalization of states \langle M_n | M_m \rangle = 2E_n\, \delta_{nm}. In general, due to periodic boundary
      conditions, the particles can also travel in the other direction. But, in this work, I will
      mostly study heavy-light mesons where the heavy quark propagates only forward in time
      (see Section 3.1.6), so I neglect these terms here. In particular, if we note M = M1 the
      ground state, then, at sufficiently large time, the correlator has the asymptotic behavior
        C(t) \xrightarrow{t \to \infty} \frac{1}{2E_M} \langle 0 | \hat{O}_\Gamma | M \rangle \langle M | \hat{O}_{\Gamma'} | 0 \rangle\, e^{-E_M t} , \quad (2.23)
        from which we can extract the energy of the ground state and the product of matrix
        elements \langle 0 | \hat{O}_\Gamma | M \rangle \langle M | \hat{O}_{\Gamma'} | 0 \rangle. Of course, on the lattice, the time t is always finite and
      there are contributions of higher excited states which fall off exponentially with time
      with an exponent proportional to E2−E1, the energy difference between the first excited
      state and the ground state.
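The approach to the asymptotic behavior (2.23) can be visualized with a synthetic two-state correlator; the amplitudes and energies below are invented, in lattice units with a = 1:

```python
import numpy as np

# synthetic correlator mimicking eq. (2.22) with a ground state and one
# excited state: C(t) = A1 e^{-E1 t} + A2 e^{-E2 t}
E1, E2, A1, A2 = 0.4, 0.9, 1.0, 0.7
t = np.arange(25)
c = A1 * np.exp(-E1 * t) + A2 * np.exp(-E2 * t)

# effective energy E_eff(t) = log[C(t)/C(t+1)]; the excited-state
# contamination dies off like e^{-(E2-E1) t}, leaving a plateau at E1
e_eff = np.log(c[:-1] / c[1:])
```

On real data the plateau must be reached before the signal drowns in noise, which is why the excited-state contamination needs to be reduced by the techniques discussed next.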
      Since the propagator becomes noisier at large time, it is necessary to reduce the
        contribution of excited states as much as possible. A first possibility is to choose an
        interpolating field with a large overlap with the desired state; this can be achieved by
      using smearing techniques (Section 2.6). In the next section, I introduce the Generalized
      Eigenvalue Problem: using many interpolating operators with the same quantum numbers, we will see how the contribution of excited states can be removed in an efficient
      and systematic way. It will also be particularly useful to extract information about
      excited states in Chapters 4 and 6.
      2.4.3 Evaluation on the lattice
        On the lattice, the correlation function is estimated via the formula (2.6), and I will
        now explain in detail the procedure in the case of a two-point correlation function.
        These correlators will be used in Chapters 3, 4 and 6. The correlation function we are interested in
      is
        C(t) = \langle O_\Gamma(t)\, O_{\Gamma'}^\dagger(0) \rangle , \quad (2.24)
        where O_\Gamma and O_{\Gamma'} are two interpolating operators at vanishing momentum,
        O_\Gamma(t) = \sum_{\vec{x}} \bar{\psi}_2(\vec{x}, t)\, \Gamma\, \psi_1(\vec{x}, t) , \qquad O_{\Gamma'}(t) = \sum_{\vec{x}} \bar{\psi}_2(\vec{x}, t)\, \Gamma'\, \psi_1(\vec{x}, t) . \quad (2.25)
        The correlation function is depicted in Figure 2.1.
        Figure 2.1 – Two-point correlation function (sink x with insertion \Gamma, source y with insertion \Gamma').
        Then, the fermionic expectation value
      is written in terms of propagators by performing the Wick contractions as explained in
      Section 2.1. The formula (2.6) gives
        C(t) = \frac{1}{N_c} \sum_{i=1}^{N_c} \langle O_\Gamma(t)\, O_{\Gamma'}^\dagger(0) \rangle_F
        = \frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x}, \vec{y}} \langle \bar{\psi}_2(x, t)\, \Gamma\, \psi_1(x, t) \cdot \bar{\psi}_1(y, 0)\, \bar{\Gamma}'\, \psi_2(y, 0) \rangle_F
        = -\frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x}, \vec{y}} \mathrm{Tr} \left[ G_2(y, 0; x, t)\, \Gamma\, G_1(x, t; y, 0)\, \bar{\Gamma}' \right] ,
      where we sum over lattice gauge configurations and take the trace over spinor and color
      indices. So, for each gauge configuration, we need to compute the quark propagators
      G1 and G2 and then evaluate the trace by performing the correct contractions. The
      correlation function is finally obtained by averaging over all gauge configurations. In
        this work, I will always use two degenerate dynamical quarks; therefore the propagators
        G_1 and G_2 are numerically the same (but formally they are different; in particular, the
        contractions between \psi_1 and \psi_2 must not be considered since only flavor non-singlet
        interpolating operators are used). Usually, we can also use \gamma_5-hermiticity to express the
        forward Dirac propagator G(x; y) in terms of the backward Dirac propagator G(y; x),
        namely G(x; y) = \gamma_5\, G(y; x)^\dagger\, \gamma_5 (the Hermitian conjugation refers to spinor space only).
In the case of the above two-point correlation function, we obtain
\[
C(t) = -\frac{1}{N_c} \sum_{i=1}^{N_c} \sum_{\vec{x},\vec{y}} \operatorname{Tr}\left[ \gamma_5\, G(x,t;y,0)^\dagger\, \gamma_5\, \Gamma\, G(x,t;y,0)\, \Gamma' \right],
\]
and only one inversion is needed.
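As a sanity check of the contraction above, the trace can be evaluated with plain linear algebra. The sketch below is a minimal illustration for a single, randomly generated stand-in propagator (hypothetical setup: $G(x,t;y,0)$ at fixed source and sink stored as a $12\times 12$ matrix in spin $\otimes$ color space, pseudoscalar channel $\Gamma = \Gamma' = \gamma_5$); for this channel the $\gamma_5$-hermitian form reduces to $-\operatorname{Tr}[G^\dagger G]$:

```python
import numpy as np

# Hypothetical setup: the propagator G(x,t; y,0) for one gauge configuration,
# at fixed source and sink, as a 12x12 complex matrix (spin (x) color).
rng = np.random.default_rng(0)
G = rng.normal(size=(12, 12)) + 1j * rng.normal(size=(12, 12))

# gamma_5 in a chiral Dirac basis, promoted to spin (x) color space
gamma5 = np.diag([1, 1, -1, -1]).astype(complex)
G5 = np.kron(gamma5, np.eye(3, dtype=complex))

# Pseudoscalar channel: Gamma = Gamma' = gamma_5
Gamma = G5
Gammap = G5

# One (x,y) term of C(t), using gamma_5-hermiticity
# G(y,0; x,t) = gamma_5 G(x,t; y,0)^dagger gamma_5:
term = -np.trace(G5 @ G.conj().T @ G5 @ Gamma @ G @ Gammap)
```

In a real computation, $G$ would of course come from inverting the lattice Dirac operator on a source, and the term would be summed over $\vec x$, $\vec y$ and averaged over configurations; here the point is only that for $\Gamma = \Gamma' = \gamma_5$ the result is real and equal to $-\sum_{ij}|G_{ij}|^2$.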
2.5 The Generalized Eigenvalue Problem
Using just one interpolating field, the extraction of ground-state information is often not very precise, and the signal gets even worse for the first excited state. Therefore, more sophisticated methods are needed. The idea is to use different interpolating operators, with different overlaps with the excited states, and combine them to create an improved operator with the largest overlap with the ground state. This can be done systematically by solving a Generalized Eigenvalue Problem.
We consider several operators $O_i$ with the same quantum numbers; the correlation matrix is then
\[
C_{ij}(t) = \langle O_i(t)\, O_j^\dagger(0) \rangle = \sum_{n=1}^{\infty} Z_{ni}\, Z_{nj}^{*}\, e^{-E_n t}, \qquad i,j = 1,\dots,N,
\]
where $Z_{ni} = \frac{1}{2E_n}\, \langle 0 | \hat O_i | B_n \rangle$ corresponds to the strength of the overlap between the interpolating field $O_i$ and the $n$th excited state. The Generalized Eigenvalue Problem [42] consists in solving the matrix equation
\[
C(t)\, v_n(t,t_0) = \lambda_n(t,t_0)\, C(t_0)\, v_n(t,t_0), \tag{2.26}
\]
where $v_n(t,t_0)$ and $\lambda_n(t,t_0)$ are respectively the generalized eigenvectors and eigenvalues. In the following, we assume that $t_0 > t/2$; this condition is necessary to ensure a small contribution of the excited states [42]. From the eigenvalues, we can extract the different energy levels by considering the following estimator
\[
E_n^{\mathrm{eff}}(t,t_0) = -\partial_t \log \lambda_n(t,t_0) = \frac{1}{a} \log \frac{\lambda_n(t,t_0)}{\lambda_n(t+a,t_0)} = E_n + O\!\left( e^{-\Delta E_{N+1,n}\, t} \right), \tag{2.27}
\]
where $E_n$ is the exact energy of the $n$th state and $\Delta E_{N+1,n} = E_{N+1} - E_n$ is the energy difference between the $n$th and $(N+1)$th states. This formula has to be compared with the case where only one interpolating field is used, where the suppression factor is only $O(\exp(-(E_2 - E_1)t))$. It is then advantageous to have a large basis of interpolating fields. However, the GEVP tends to be unstable when large bases are used, especially if the interpolating fields are not sufficiently different. In practice, in this work, the choice $N = 3$–$5$ seems optimal.
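The GEVP itself is a standard numerical task. As an illustrative sketch (synthetic, exactly solvable data: three states and three operators, with made-up energies and overlaps), `scipy.linalg.eigh` solves eq. (2.26) directly, and the estimator (2.27) then recovers the input energies exactly, since here the number of operators equals the number of states:

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic, exactly solvable data (all values made up for illustration):
# C_ij(t) = sum_n Z_ni Z_nj e^{-E_n t} with three states and three operators.
E = np.array([0.5, 0.9, 1.4])          # energies in lattice units (a = 1)
rng = np.random.default_rng(1)
Z = rng.normal(size=(3, 3))            # overlaps Z_ni

def C(t):
    return Z.T @ np.diag(np.exp(-E * t)) @ Z

t0, t = 4, 7                           # satisfies t0 > t/2, as recommended above
lam, v = eigh(C(t), C(t0))             # generalized eigenvalue problem, eq. (2.26)
lam = np.sort(lam)[::-1]               # largest eigenvalue <-> lowest state

# Effective energies from the estimator (2.27) with a = 1:
lam_next = np.sort(eigh(C(t + 1), C(t0), eigvals_only=True))[::-1]
E_eff = np.log(lam / lam_next)
```

With real data the correlation matrix is noisy and only approximately of this form, so the eigenvalues carry the $O(e^{-\Delta E_{N+1,n} t})$ corrections discussed above; in this exactly solvable toy case, $\lambda_n(t,t_0) = e^{-E_n(t-t_0)}$ holds exactly.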
From the eigenvectors, we can also build improved interpolating operators having the optimized overlap with the desired states, reducing the contamination from higher excited states. First, we define:
\[
\hat Q_n^{\mathrm{eff}}(t,t_0) = R_n(t,t_0) \left( \hat O,\, v_n(t,t_0) \right), \tag{2.28}
\]
where $R_n$ is a normalization coefficient given by
\[
R_n(t,t_0) = \left( v_n(t,t_0),\, C(t)\, v_n(t,t_0) \right)^{-1/2} \left( \frac{\lambda_n(t_0+a,t_0)}{\lambda_n(t_0+2a,t_0)} \right)^{t/(2a)}, \tag{2.29}
\]
and where $(a,b) = a_i^* b_i$ is the inner product over eigenvector indices. Then, this operator can be used as an effective creation operator, namely we have
\[
e^{-Ht}\, \hat Q_n^{\mathrm{eff}}(t,t_0)^\dagger\, |0\rangle = |n\rangle + O\!\left( e^{-\Delta E_{N+1,n}\, t_0} \right) \quad \text{at fixed } t - t_0. \tag{2.30}
\]
Again, the magnitude of the contamination from higher excited states is small and decreases when increasing the value of $t_0$.
We can now apply these results in the case of a matrix element of the form $M_n = \langle 0 | \hat P | n \rangle$ to obtain:
\[
M_n^{\mathrm{eff}} = \langle 0 | \hat P\, e^{-Ht}\, \hat Q_n^{\mathrm{eff}}(t,t_0)^\dagger | 0 \rangle = \langle P(t) \left( Q_n^{\mathrm{eff}}(t,t_0) \right)^\dagger \rangle = M_n + O\!\left( e^{-\Delta E_{N+1,n}\, t_0} \right). \tag{2.31}
\]
Using eq. (2.28), we can express this estimator in terms of eigenvalues and eigenvectors:
\[
M_n^{\mathrm{eff}}(t,t_0) = R_n(t,t_0) \left( \widetilde C(t),\, v_n(t,t_0) \right), \tag{2.32}
\]
where $\widetilde C_i(t) = \langle P(t)\, O_i^\dagger(0) \rangle$.
      2.6 Smearing
Another technique used to improve the quality of the signal is called smearing. It is a transformation where each gauge link variable $U_\mu(x)$ is replaced by an average of the gauge link variables along certain paths connecting the endpoints of the original link. In particular, it reduces the short-distance fluctuations of the quantum field without affecting its IR structure: indeed, the smearing transformation consists in adding irrelevant operators, whose contributions vanish in the continuum limit. It is extremely useful to reduce the gauge noise of observables, and many different algorithms exist. In this work, we will use two of them: the APE and the HYP smearings.
Smearing can also be applied to the fermionic field to increase the overlap of an interpolating operator with the ground state. In particular, in this work, the different operators used in the Generalized Eigenvalue Problem basis will usually correspond to different levels of Gaussian smearing applied to some local operator.
      2.6.1 APE smearing
The APE smearing was introduced by the APE Collaboration [43]; the idea is to replace each link variable $U_\mu(x)$ by a weighted average of this link and the surrounding staples
\[
\widetilde U_\mu(x) = (1-\alpha)\, U_\mu(x) + \frac{\alpha}{6} \sum_{\nu \neq \mu} C_{\mu\nu}(x), \tag{2.33}
\]
Figure 2.2 – Illustration of the four staples in a hyperplane containing the original link $U_\mu(x)$ (from $x$ to $x + a\hat\mu$). The last two staples lie out of this hyperplane.
where the staples $C_{\mu\nu}(x)$ correspond to the six shortest paths starting from the point $x$ and ending at the point $x + a\hat\mu$ (see Figure 2.2). The transformation (2.33) does not belong to SU(3) and the new link variable has to be projected back to SU(3):
\[
U_\mu^{\mathrm{APE}}(x) = \mathrm{Proj}_{SU(3)}\, \widetilde U_\mu(x). \tag{2.34}
\]
      Finally, this smearing procedure can be iterated several times.
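A single APE step, eqs. (2.33)–(2.34), can be sketched as follows. The staples are assumed precomputed and passed in as a list, and the SU(3) projection is implemented here via polar decomposition plus a determinant fix, which is one common choice (iterative maximization of $\mathrm{Re}\,\mathrm{Tr}(X U^\dagger)$ is another); all inputs below are made up for illustration:

```python
import numpy as np

def project_su3(W):
    """Project a 3x3 matrix back to SU(3): unitarize via polar decomposition
    (one common choice), then fix the determinant to 1."""
    u, _, vh = np.linalg.svd(W)
    U = u @ vh                                   # nearest unitary matrix
    return U / np.linalg.det(U) ** (1.0 / 3.0)   # divide out det^{1/3}

def ape_smear_link(U_mu, staples, alpha=0.6):
    """One APE step for a single link, eqs. (2.33)-(2.34); the six staples
    C_{mu nu}(x) are assumed precomputed and passed as a list."""
    W = (1.0 - alpha) * U_mu + (alpha / 6.0) * sum(staples)
    return project_su3(W)

# Toy usage with made-up SU(3)-like inputs:
rng = np.random.default_rng(2)
def random_su3():
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    return project_su3(A)

U = random_su3()
smeared = ape_smear_link(U, [random_su3() for _ in range(6)])
```

In a real implementation the staples are built from products of neighboring links of the 4-d gauge field; the sketch only shows that the weighted average leaves SU(3) and that the projection restores unitarity and unit determinant.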
      2.6.2 HYP smearing
The HYP smearing (hypercubic smearing) [44] can be seen as a generalization of the APE smearing where fat links are now constructed from links which lie in hypercubes containing the original link. The smoothing procedure is done in three steps with coefficients $(\alpha_1, \alpha_2, \alpha_3)$. In this work, it will be applied to the time-links of heavy-light correlation functions; in this case one has
\[
U_0^{\mathrm{HYP}}(x) = \mathrm{Proj}_{SU(3)} \left[ (1-\alpha_1)\, U_0(x) + \frac{\alpha_1}{6} \sum_{\pm i \neq 0} \widetilde V_{i;0}(x)\, \widetilde V_{0;i}(x+\hat\imath)\, \widetilde V_{i;0}^\dagger(x+\hat 0) \right],
\]
where the decorated links $\widetilde V_{\mu;\nu}(x)$ are defined by
\[
\widetilde V_{\mu;\nu}(x) = \mathrm{Proj}_{SU(3)} \left[ (1-\alpha_2)\, U_\mu(x) + \frac{\alpha_2}{4} \sum_{\pm\rho \neq \nu,\mu} \bar V_{\rho;\nu,\mu}(x)\, \bar V_{\mu;\rho,\nu}(x+\hat\rho)\, \bar V_{\rho;\nu,\mu}^\dagger(x+\hat\mu) \right],
\]
and finally the decorated links $\bar V_{\mu;\nu,\rho}(x)$ are defined by
\[
\bar V_{\mu;\nu,\rho}(x) = \mathrm{Proj}_{SU(3)} \left[ (1-\alpha_3)\, U_\mu(x) + \frac{\alpha_3}{2} \sum_{\pm\eta \neq \rho,\nu,\mu} U_\eta(x)\, U_\mu(x+\hat\eta)\, U_\eta^\dagger(x+\hat\mu) \right].
\]
The optimal choice obtained in ref. [44] corresponds to the HYP1 action and is given by $\vec\alpha_{\mathrm{HYP1}} = (0.75, 0.6, 0.3)$. Another choice, proposed in ref. [45] after minimizing the noise-to-signal ratio, is called HYP2 and is given by $\vec\alpha_{\mathrm{HYP2}} = (1.0, 1.0, 0.5)$.
      2.6.3 Gaussian smearing
While APE and HYP smearings are applied to the gauge field and used to reduce the noise coming from short-distance fluctuations, the Gaussian smearing [46] is applied to the fermionic field and is defined by
\[
\psi^{(k)}(x) = (1 + \kappa_G \Delta)^{n_k}\, \psi(x), \tag{2.35}
\]
where $\Delta$ is the 3-d Laplace operator defined in Appendix A, $n_k$ is the number of steps, and $\kappa_G$ is the coupling strength of the nearest neighbors in space directions. Gaussian smearing is often combined with gauge link smearing, where the Laplace operator is itself constructed from fat links. Intuitively, starting from a local source, the transformation (2.35) leads to a non-local source with a Gaussian distribution; the radius of the source, $r_k = 2a\sqrt{\kappa_G n_k}$, increases with the number of iterations. Since mesons are extended objects, the smeared interpolating field $\psi^{(k)}$ is expected to have a better overlap with the ground-state level, as depicted in Figure 2.3.
Figure 2.3 – Effective mass $m_{\mathrm{eff}}(t) = \log(C(t)/C(t+a))$ using heavy-light two-point correlation functions for the B meson computed with different levels of smearing. Here $\kappa_G = 0.1$ and $n_k = (33, 133, 338)$.
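The spreading behaviour of eq. (2.35) is easy to verify numerically on a free lattice (simplifying assumptions: trivial gauge links, a scalar field instead of a spinor, $a = 1$). Starting from a point source, the r.m.s. radius after $n_k$ steps grows like $\sqrt{\kappa_G n_k}$, consistent with the random-walk scaling behind $r_k = 2a\sqrt{\kappa_G n_k}$:

```python
import numpy as np

def laplace_3d(psi):
    """Free 3-d lattice Laplacian (trivial gauge links, unit lattice spacing)."""
    out = -6.0 * psi
    for axis in range(3):
        out += np.roll(psi, +1, axis) + np.roll(psi, -1, axis)
    return out

def gaussian_smear(psi, kappa_G, n_k):
    """Apply (1 + kappa_G * Laplacian)^{n_k}, eq. (2.35)."""
    for _ in range(n_k):
        psi = psi + kappa_G * laplace_3d(psi)
    return psi

L = 32
src = np.zeros((L, L, L))
src[0, 0, 0] = 1.0                           # local (point) source
sm = gaussian_smear(src, kappa_G=0.1, n_k=20)

# r.m.s. radius of the smeared source (minimal-image distances on the torus):
d = (np.arange(L) + L // 2) % L - L // 2
r2 = d[:, None, None] ** 2 + d[None, :, None] ** 2 + d[None, None, :] ** 2
r_rms = np.sqrt((sm * r2).sum() / sm.sum())
```

Each step is a convolution with a short-range positive kernel, so the source stays positive, its total weight is conserved, and its variance grows linearly with $n_k$ ($2\kappa_G n_k$ per spatial direction here), giving $r_{\mathrm{rms}} = \sqrt{6\kappa_G n_k} \approx 3.46$ for these parameters.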
      2.7 Error estimation
In a Monte Carlo simulation, the Markov chain has a finite size (typically of the order of $10^4$) and the same configurations are used to compute different observables, which are therefore correlated. Moreover, since the Markov process generates each new gauge configuration from the previous one, it also introduces autocorrelation. We would like to estimate the statistical error associated with an observable computed on the lattice (using eq. (2.6)), taking into account all correlations. I will briefly discuss two techniques used in this work. The first one is the Jackknife method, based on re-sampling. The second is the Gamma method [47], where one tries to estimate the full autocorrelation matrix. Systematic errors are not considered here and will be the subject of the next section.
In Lattice QCD, the primary observables are usually correlation functions. We label a set of $P$ primary observables (with $N$ measurements for each) by:
\[
\left\{ \alpha_p^n \;\middle|\; p = 0, \dots, P;\; n = 1, \dots, N \right\}. \tag{2.36}
\]
      2.7.1 The Jackknife Procedure
The Jackknife procedure was originally introduced by Quenouille for bias reduction. Later, Tukey noticed that the same technique turns out to be useful to estimate the variance. It has the advantage of being easily implemented and also very fast. For a review, see [48].
      Mean value estimate
The mean value $\hat\alpha$ of a primary observable is estimated by the following unbiased estimator
\[
\bar\alpha_p = \frac{1}{N} \sum_{i=1}^{N} \alpha_p^i. \tag{2.37}
\]
Then, for each secondary observable $f$, function of the primary observables $\alpha_p$, an estimator of the true mean $\hat f = f(\hat\alpha)$ is given by
\[
\bar f = f(\bar\alpha_p). \tag{2.38}
\]
However, this estimator generally has a bias of order $1/N$, which can be corrected by the Jackknife procedure (formula (2.42)). Since the statistical errors in the Monte Carlo simulation are of order $1/\sqrt{N}$, this bias can usually be safely neglected.
To estimate the variance, one would naively use the following formula:
\[
\sigma^2(f) = \frac{1}{N(N-1)} \sum_{i=1}^{N} \left( f(\alpha_p^i) - \bar f \right)^2, \tag{2.39}
\]
but $f(\alpha_p^i)$ is generally a spread distribution, $\langle f(\alpha_p^i) \rangle \neq \hat f$, and the previous formula fails. Moreover, it does not take into account autocorrelations. The blocking procedure described in the next section will address the second issue, and the Jackknife resampling method will propose a solution to the first one.
      Blocking
We divide our $N$ measurements into $N_B$ blocks of $B$ consecutive measurements ($N = N_B \times B$). The block average $\beta_p^b$ of the primary observable $p$ is then
\[
\beta_p^b = \frac{1}{B} \sum_{i=1}^{B} \alpha_p^{i+(b-1)B}, \qquad b = 1, \dots, N_B. \tag{2.40}
\]
If the block size is chosen to be larger than the autocorrelation time ($N \gg B \gg \tau$), the block variables can be considered as independent new variables characterized by their mean $\beta_p^b$ and their variance. But, obviously, the mean and the variance are invariant under such a blocking transformation. Therefore, the statistical error on the primary observables $\alpha_p$ can be estimated via the naive estimator (2.39) using the block variables $\beta_p^b$. The problem appears when non-linear functions of the primary observables are considered, since $\langle f(\beta_p^b) \rangle \neq \hat f$. In this case, the Jackknife procedure can be used.
      Jackknife samples
      The Jackknife samples (bins) are defined by
      J
      b
      p =
      1
      N − B
      X
      N
      i=1
      α
      i
      p −
      X
      B
      i=1
      α
      i+(b−1)N
      p
      !
      =
      1
      N − B

      Nαp − Bβb
      p

      , (2.41)
      and correspond to the full sample where the block b has been deleted. Consequently,
      each jackknife block contains most of the information (especially when B = 1, the
      one-deleted Jackknife) and are clearly not independent.
From the Jackknife samples, the bias of order $1/N$ in (2.38) can be corrected by considering
\[
\bar f_J = \bar f - (N_B - 1) \left( \bar f' - \bar f \right), \qquad \bar f' = \frac{1}{N_B} \sum_{b=1}^{N_B} f(J_p^b). \tag{2.42}
\]
      Error estimate
Finally, an unbiased estimator of the variance for a secondary variable is given by the Jackknife variance (see ref. [49] for a proof),
\[
\sigma_J^2(f) = \frac{N_B - 1}{N_B} \sum_{b=1}^{N_B} \left( f(J_p^b) - \bar f' \right)^2, \tag{2.43}
\]
where the pre-factor $\frac{N_B-1}{N_B}$ corrects for the fact that our variables are not independent but correspond to a resampling of the original sample. In eq. (2.43), the mean estimate $\bar f$ could also be used instead of $\bar f'$. In practice, to check the reliability of the result, we can verify that it does not depend on the block size $B$, which should be chosen larger than the autocorrelation time. Finally, using the Jackknife procedure to propagate errors has the advantage of taking cross-correlations into account automatically, contrary to the standard propagation of errors where they must be added explicitly.
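The whole procedure, eqs. (2.40)–(2.43), fits in a few lines. The sketch below (synthetic, uncorrelated data; the function name and the observable $f = \langle A\rangle/\langle B\rangle$ are made up for illustration) returns the central value and the blocked Jackknife error of a nonlinear secondary observable:

```python
import numpy as np

def jackknife(samples, f, block_size=1):
    """Blocked jackknife estimate of a secondary observable f and its error,
    following eqs. (2.40)-(2.43). `samples` has shape (N, P)."""
    NB = samples.shape[0] // block_size
    x = samples[:NB * block_size]                       # drop leftover measurements
    blocks = x.reshape(NB, block_size, -1).mean(axis=1)       # beta_p^b, eq. (2.40)
    # Jackknife samples J_p^b = (N alpha_bar - B beta^b) / (N - B), eq. (2.41)
    J = (x.sum(axis=0) - block_size * blocks) / (len(x) - block_size)
    fJ = np.array([f(j) for j in J])
    fbar = f(x.mean(axis=0))                            # eq. (2.38)
    var = (NB - 1) / NB * np.sum((fJ - fJ.mean()) ** 2)       # eq. (2.43)
    return fbar, np.sqrt(var)

# Toy usage (synthetic data): error of the nonlinear observable f = <A>/<B>
rng = np.random.default_rng(3)
data = np.stack([rng.normal(2.0, 0.1, 10_000),
                 rng.normal(1.0, 0.1, 10_000)], axis=1)
ratio, err = jackknife(data, lambda a: a[0] / a[1], block_size=10)
```

On real, autocorrelated Monte Carlo data one would vary `block_size` and check that the error estimate reaches a plateau, as discussed above.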
      2.7.2 The Gamma method
The $\Gamma$-method is described in detail in ref. [47]; I just recall the main formulae. The central point is the estimation of the full autocorrelation matrix
\[
\Gamma_{nm}(t) = \frac{1}{N-t} \sum_{i=1}^{N-t} \left( \alpha_n^i - \bar\alpha_n \right) \left( \alpha_m^{i+t} - \bar\alpha_m \right), \tag{2.44}
\]
for times $t \ll N$, in terms of the primary observables $\alpha_n$. To estimate the error associated with a secondary observable $f$, which depends on the primary observables $\alpha_n$, we first evaluate the projected autocorrelation function defined by
\[
\Gamma_f(t) = \sum_{n,m} f_n f_m \Gamma_{nm}(t), \qquad f_n = \frac{\partial f}{\partial \alpha_n}(\bar\alpha_n), \tag{2.45}
\]
where $f_n$ is the partial derivative of $f$ with respect to $\alpha_n$, evaluated at the central value $\bar\alpha_n$. In practice, the derivatives are computed numerically. In particular, $\Gamma_f(0)/N$ corresponds to the variance of $\bar f$ when the autocorrelation is neglected. Finally, we can define
the integrated autocorrelation time by
\[
\tau_{\mathrm{int},f}(W) = \frac{1}{2} + \sum_{t=1}^{W} \rho_f(t), \qquad \rho_f(t) = \frac{\Gamma_f(t)}{\Gamma_f(0)}, \tag{2.46}
\]
where $W$ is a cutoff (summation window) needed due to the finite size of the Markov chain. Furthermore, since the noise of the autocorrelation function is roughly constant in time, the signal is dominated by noise at large times. The statistical error of the observable $f$ from $N$ measurements is finally given by
\[
\sigma_{\Gamma,f}^2 = \frac{\Gamma_f(0)}{N} \times 2\, \tau_{\mathrm{int},f}(W). \tag{2.47}
\]
In the case where autocorrelation is absent, we have $\tau_{\mathrm{int},f} = 1/2$ and one recovers the expected estimator for the variance. The value of the cutoff $W$ should be large enough so that the neglected remainder in eq. (2.46) is indeed small, but not so large that the sum includes terms dominated by noise. In ref. [47], the author proposed an automatic procedure for searching the window $W$; a typical example is given in Figure 2.4. However, neglecting the tail of the autocorrelation function leads to an underestimation of $\tau_{\mathrm{int}}$ and, therefore, of the statistical error.
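For a single primary observable, the chain (2.44), (2.46), (2.47) with a fixed window $W$ can be sketched as follows (synthetic data: an AR(1) Markov chain with $\rho(t) = \varphi^t$, for which $\tau_{\mathrm{int}} = 1/2 + \varphi/(1-\varphi) = 4.5$ exactly at $\varphi = 0.8$; the automatic window search of ref. [47] is not implemented here):

```python
import numpy as np

def gamma_method(x, W):
    """Gamma-method error for the mean of a single primary observable,
    eqs. (2.44), (2.46), (2.47), with a fixed summation window W."""
    N = len(x)
    xbar = x.mean()
    d = x - xbar
    Gamma = np.array([np.sum(d[:N - t] * d[t:]) / (N - t) for t in range(W + 1)])
    rho = Gamma / Gamma[0]
    tau_int = 0.5 + rho[1:].sum()                       # eq. (2.46)
    err = np.sqrt(Gamma[0] / N * 2.0 * tau_int)         # eq. (2.47)
    return xbar, err, tau_int

# Toy Markov chain with known autocorrelation: AR(1), rho(t) = phi^t,
# so tau_int = 1/2 + phi/(1 - phi) = 4.5 for phi = 0.8.
rng = np.random.default_rng(4)
phi, N = 0.8, 500_000
eps = rng.normal(size=N)
x = np.empty(N)
x[0] = 0.0
for i in range(1, N):
    x[i] = phi * x[i - 1] + eps[i]
xbar, err, tau_int = gamma_method(x, W=50)
```

Here $W = 50 \gg \tau_{\exp} = -1/\log\varphi \approx 4.5$, so the truncated tail is negligible; with a short chain or a window chosen too small, the estimate would undershoot $\tau_{\mathrm{int}}$ exactly as described above.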
Figure 2.4 – Typical example for the determination of the window $W$ from the normalized autocorrelation function $\rho_f(t)$.
Therefore, an improved estimator for $\tau_{\mathrm{int},f}$ was proposed in ref. [50], which takes into account the tail of the autocorrelation matrix. This critical slowing down is due to the presence of slow modes in the Monte Carlo transition matrix, and the associated characteristic time, $\tau_{\exp}$, depends on the algorithm. Each observable couples differently to these slow modes and, when this coupling is small, the tail of the autocorrelation function is difficult to estimate. In the aforementioned reference, the author gives an upper bound for the neglected part in eq. (2.46), which corresponds to $\tau_{\exp}\, \rho_f(W)$ and can then be used to obtain a more conservative estimate of the error. Since the topological charge is particularly sensitive to the slow modes, it is one of the most popular quantities used to estimate $\tau_{\exp}$.
Once $\tau_{\exp}$ is approximately known, the idea is to choose a second window $W_u$, where the signal still differs significantly from zero, and to estimate the remaining part of eq. (2.46) by $\rho_f(t) \approx \rho_f(W_u)\, e^{-(t-W_u)/\tau_{\exp}}$ for $t > W_u$. Then, one obtains
\[
\tau_{\mathrm{int},f}^{(2)}(W_u) = \tau_{\mathrm{int},f}(W_u) + \tau_{\exp}\, \rho_f(W_u), \tag{2.48}
\]
where the first part is computed explicitly in the region where it is rather well determined using eq. (2.46), and the second part is an estimate of the contribution of the tail. The statistical error is now given by
\[
\sigma_{\Gamma,f}^2 = \frac{\Gamma_f(0)}{N} \times 2\, \tau_{\mathrm{int},f}^{(2)}(W_u). \tag{2.49}
\]
An illustration of the window procedure is given in Figure 2.5.
Figure 2.5 – Improved estimator for the integrated autocorrelation time.
      2.8 Setting the scale and the continuum limit
      In the first chapter, the action was formulated in terms of dimensionless quantities parametrized by the bare coupling constant g0 and the bare quark masses mi (or,
      equivalently, by β and the hopping parameters κi). In the case of Nf = 2 simulations,
      where only two degenerate dynamical quarks are considered, we are left with two free
      parameters (β, κ). The first one sets the global scale of the simulation and the second
      one is used to tune the quark mass.
      Setting the scale
Any observable is obtained in lattice units and, to compare the result with experiment, it is convenient to convert it into physical units. This step, called setting the scale, consists in computing the lattice spacing in physical units by imposing that one observable, computed on the lattice, matches its physical value. Setting the scale and adjusting the quark masses is a coupled problem. Therefore, to set the scale one usually chooses a physical observable $A$ which depends weakly on the quark masses, so that the two steps can be considered as independent. The scale is then obtained by imposing the condition$^1$
\[
a[\mathrm{MeV}^{-1}] = \frac{(aA)_{\mathrm{lat}}}{A_{\exp}[\mathrm{MeV}]},
\]
where $(aA)_{\mathrm{lat}}$ is the value of the observable computed on the lattice and $A_{\exp}$ is its physical value in MeV. Typical observables are the omega baryon mass [51], or the pion and kaon decay constants $f_\pi$, $f_K$ [52]. The observable should be chosen with care: besides the fact that it should not depend too much on the quark masses, it should also be easily computed on the lattice with a small statistical error to allow for a precise estimation. The systematic errors should also be well under control: in particular, the mass of the $\rho$ meson is not an optimal choice since it corresponds to a resonance. Finally, the error on the scale will affect all quantities expressed in physical units, but also the continuum and chiral extrapolations (see Section 2.9).

1. The conversion factor between fm and MeV is $1~\mathrm{fm}^{-1} = 197.327~\mathrm{MeV}$.
The quark masses are determined in a second step. In this work, up and down quarks are assumed to be degenerate and their mass can be set by computing just one observable, like the pion mass. First, the pion mass is computed in lattice units, $(am_\pi)_{\mathrm{lat}}$; then the result is converted into physical units using the previous estimate of the lattice spacing:
\[
m_\pi[\mathrm{MeV}] = \frac{(am_\pi)_{\mathrm{lat}}}{a[\mathrm{MeV}^{-1}]}.
\]
There is an ambiguity in setting the scale at finite lattice spacing due to discretization errors, but this ambiguity should vanish in the continuum limit and does not affect the results extrapolated to $a \to 0$. Nevertheless, since we work with $N_f = 2$ dynamical quarks, an ambiguity arises from the choice of observables used to match the theory with experiment.
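The two steps are simple arithmetic once the lattice numbers are available. In the sketch below, only the conversion constant $1~\mathrm{fm}^{-1} = 197.327~\mathrm{MeV}$ is physical; the lattice values $(af_K)_{\mathrm{lat}}$, $(am_\pi)_{\mathrm{lat}}$ and the experimental $f_K$ are made-up illustrative numbers, not results of this work:

```python
# Illustrative arithmetic for setting the scale. All lattice numbers below are
# made up; only the conversion constant 1 fm^{-1} = 197.327 MeV is physical.
HBARC = 197.327                 # MeV * fm

afK_lat = 0.0406                # hypothetical (a f_K) measured on the lattice
fK_exp = 155.0                  # illustrative value of f_K in MeV

a_inv_MeV = fK_exp / afK_lat    # 1/a in MeV, from the matching condition
a_fm = HBARC / a_inv_MeV        # lattice spacing in fm

# Second step: convert another observable, e.g. a pion mass (a m_pi)_lat (made up)
ampi_lat = 0.070
mpi_MeV = ampi_lat * a_inv_MeV
```

With these inputs, one finds $a \approx 0.052~\mathrm{fm}$ and $m_\pi \approx 267~\mathrm{MeV}$, i.e. a pion well inside the unphysical mass range discussed in the next section.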
      The continuum limit
      Lattice QCD offers a natural regularization of the theory both in the infrared (IR)
      and in the ultraviolet (UV) regimes (via the lattice spacing a and the spatial extent L
      of the lattice). To compare the results with experiment, we would like to remove both
      cut-offs. Neglecting volume effects, this is performed by taking the limit a → 0 at fixed
      physical volume (corresponding to larger and larger lattice resolutions L/a).
      2.9 Discussion of systematic errors
A typical lattice simulation is performed in a physical volume of a few fermi ($L \sim 3~\mathrm{fm}$) and at lattice spacings of the order of $a \sim 0.06~\mathrm{fm}$, corresponding to lattice resolutions $L/a \sim 50$. In this work, we also work at unphysical quark masses, where the pion mass lies in the range $[190, 450]~\mathrm{MeV}$. Therefore, many systematic errors have to be considered.
      Discretization effects
      Due to the finite lattice spacing a, one expects discretization errors linear in the
      lattice spacing. However, improved actions and operators can be used to cancel O(a)
      artifacts. In the case of Wilson fermions, this is done by adding the Clover term (1.30)
      in the action and higher-dimensional counterterms to the currents of interest. The
      theory is then called O(a)-improved and the first corrections for on-shell quantities
      are quadratic in the lattice spacing. To evaluate discretization errors, we can perform
      several simulations, at different values of the lattice spacing a, and then extrapolate to
      the continuum limit. To keep the physical volume V constant, the lattice resolution
      L/a has to be increased and the numerical cost of the simulations grows. Therefore,
      O(a)-improvement can help to reduce the range over which the lattice spacing should
      vary.
      Volume effects
This source of systematic errors is due to the finite size of the lattice: because of periodic boundary conditions, virtual pions can travel around the lattice. The associated corrections, of order $O(e^{-m_\pi L})$, were computed in ref. [53] and decrease exponentially with the volume. The CLS ensembles used in this work fulfill the criterion $L m_\pi > 4$ and volume effects are expected to be very small. Therefore, we will not perform any infinite-volume extrapolation.
      Dynamical quarks
Evaluating the quark propagator on the lattice becomes more and more difficult as the pion mass gets closer to its physical value. Therefore, many lattice simulations are performed at non-physical quark masses. To estimate the associated systematic error, different simulations at several quark masses are performed and the results are extrapolated to the chiral limit using fit formulae inspired by chiral perturbation theory [54, 55]. A second source of systematic errors comes from the fact that only two dynamical quarks are used in the simulations (quark loops with c, s, b and t quarks are neglected); the associated error is more difficult to estimate.
