Publication | Closed Access
Evaluating Large Language Models in Class-Level Code Generation
Citations: 97
References: 47
Year: 2024
Venue: Unknown
Keywords: Llm Fine-tuning, Engineering, Software Engineering, Software Analysis, Large Language Models, Natural Language Processing, Computational Linguistics, Language Studies, Machine Translation, Code Generation, Computer Science, Different Llms, Code Representation, Software Design, Program Analysis, Software Testing, Code Generation Benchmarks, Linguistics, Software Language Engineering, Language Generation
Recently, many large language models (LLMs) have been proposed, showing advanced proficiency in code generation. Meanwhile, many efforts have been dedicated to evaluating LLMs on code generation benchmarks such as HumanEval. Although very helpful for comparing different LLMs, existing evaluation focuses on a simple code generation scenario (i.e., function-level or statement-level code generation), which asks LLMs to generate a single code unit (e.g., a function or a statement) from a given natural language description. Such evaluation thus targets independent and often small-scale code units, leaving it unclear how LLMs perform in real-world software development scenarios.
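To make the function-level scenario concrete, below is a minimal sketch of a HumanEval-style task: the model is given a natural-language description (a docstring-style prompt) and is expected to complete one standalone function, which is then checked against unit tests. The prompt, function name, and tests here are hypothetical illustrations, not items from the actual benchmark.

```python
# Illustrative sketch of a function-level (HumanEval-style) code generation task.
# The prompt, reference completion, and tests are hypothetical examples.

PROMPT = '''
def has_close_pair(numbers: list[float], threshold: float) -> bool:
    """Return True if any two numbers in the list are closer than threshold."""
'''

def check(candidate) -> bool:
    """Run simple unit tests against a generated function."""
    try:
        assert candidate([1.0, 2.0, 3.0], 0.5) is False
        assert candidate([1.0, 2.8, 3.0], 0.5) is True
        assert candidate([], 0.5) is False
        return True
    except AssertionError:
        return False

# A reference completion, standing in for the model's output:
def has_close_pair(numbers, threshold):
    return any(
        abs(a - b) < threshold
        for i, a in enumerate(numbers)
        for b in numbers[i + 1:]
    )

if __name__ == "__main__":
    print(check(has_close_pair))  # True if the completion passes the tests
```

By contrast, the class-level scenario evaluated in this paper requires generating multiple interdependent methods within one class, rather than a single self-contained function like the one above.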