Download the relevant Faculty Council submission

Guidelines for Students and Instructors on the Use of AI

The Faculty supports both instructors and students in the use of generative artificial intelligence (AI). Generative AI is the most influential IT innovation of the past decade, making engineering work more efficient. It is important that both students and instructors become experts in this technology: able to use it proficiently and to understand both its strengths and the problems that may arise from its use. We encourage all members of the Faculty to take full advantage of the capabilities of generative AI to make learning and teaching more effective, within the framework of ethical use.

On this page we provide guidance on judging the use of AI and define the framework for its ethical application.

Download student declaration on the use of generative AI

Example completion of the generative AI student declaration

What is generative artificial intelligence?

Generative AI is a technology that, using machine-learning tools (typically neural networks and large language models), is capable of producing or processing new content (text, images, sound). These models are trained on large amounts of data, typically publicly available on the internet, and the content they generate follows the structure and semantics of this training set.

Modes of use

From the perspective of ethical use, it is crucial how a generative AI model is applied: as an assistant that supports rather than replaces the user’s work, or as a substitute for expertise, delegating tasks that require the user’s own competence to the model. The principles of ethical use differ for these two modes. Based on the mode of use, generative AI tools can be grouped as follows (though the boundaries are not strict):

  • Directive use: the model generates a full answer (paragraphs, even full program code) to a question posed by the user in natural language. ChatGPT is an example. In this case, the interaction with the model is initiated directly by the user.
  • Assistive use: the model offers shorter (sometimes longer) continuations to a sentence or piece of code started by the user. This application does not require explicit prompting: the system automatically derives the model’s input from part of the document or code the user has already written, making these models a variant of predictive text input. Here, the user retains much greater control over the evolution of the document or code, and generative AI functions as an assistant. GitHub Copilot is an example.
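
To make the distinction concrete, here is a minimal sketch of what assistive completion looks like in practice (the function and the suggested body are illustrative and not taken from any specific tool):

    # A minimal illustration of assistive use: the user writes the signature
    # and docstring, and the assistant proposes the body as an inline
    # suggestion that the user can accept, edit, or reject.
    def moving_average(values: list[float], window: int) -> list[float]:
        """Return the moving averages of `values` over `window`-sized slices."""
        # --- everything below is a typical assistant-style suggestion ---
        if window <= 0:
            raise ValueError("window must be positive")
        return [
            sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)
        ]

The user retains control because each suggestion is explicitly accepted or rejected before it becomes part of the code.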

Dangers and responsibility

Many major IT companies provide access to generative AI models as free or paid services. While recognizing all the advantages of the technology, users must consider the following risks:

  • Hallucination: model responses can be convincing even when factually incorrect. Verifying the truth of answers is therefore essential in every case.
  • Bias: model responses depend on the data used for training. If the training data was unbalanced or incomplete in some respect, the system’s responses will reflect that.

Currently, no generative AI service takes responsibility for the factual accuracy of generated content, and they typically prohibit citing model outputs as factual truth. The user must verify the model’s response, and the user is always responsible for its truthfulness.

The Ethical Use of Generative Artificial Intelligence

Principles

Generative AI can be of great help to everyone in acquiring and expanding professional knowledge, summarizing certain areas or topics, and filling knowledge gaps. These all count as ethical uses of generative AI. However, it must be considered that model responses can never serve as references (e.g., in student complaints); verifying their accuracy is, of course, the responsibility of the user (student or instructor).

The use of assistive tools (e.g., GitHub Copilot, including for programming) is considered ethical by default (unless explicitly prohibited) and does not require special marking or citation, but the author remains responsible for the correctness of the generated content.

It is also considered ethical to use generative AI tools to transform or improve the user’s own content: spell-checking, translation, rephrasing, summarization, or even generating paragraphs of text, provided that all professional content and factual data were supplied by the user and included in the prompt. To emphasize: such use is ethical only if all professional content comes from the user and the AI serves purely formal and presentational purposes. In such cases, the generated text does not need to indicate AI use.

It is unethical to adopt generated content that requires the user’s professional competence or contains new information not included in the prompt, without citation or indication. In other words, it is unethical if someone is tasked with work requiring their expertise and they delegate it entirely to a generative AI model without acknowledgment.

For students, AI use in assignments, analyses, or solutions, for example, must be indicated in a declaration and/or by citation (detailed below). Without this, AI use is considered unethical.

For instructors, generating reviews, evaluations, or expert opinions, for example, is unethical if the professional content, statements, or opinions come from the AI. If, however, these were included in the prompt (e.g., as bullet points) and the model’s task was limited to formatting them into continuous text, this can be considered ethical use.

Typical applications

Literature research

Generative AI is also excellent for literature research, as it can be queried about typical solutions and current results in a field. However, the answers must be interpreted with caution: the output may be limited to the data on which the model was trained (in addition to the prompt). The results must always be verified and the underlying scientific sources located. This step is essential, since some tools (such as ChatGPT) often provide false (nonexistent or irrelevant) references. If, after proper verification, someone writes their own text based on AI-supported literature research, the AI does not need to be cited in the text, but it must be indicated in the relevant section of the declaration (if the course requires one).
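
Since some tools fabricate references, each suggested source should be checked against an authoritative index before use. Below is a minimal sketch of one possible verification step; it assumes the suggested reference includes a DOI and uses the public Crossref REST API (api.crossref.org), and the function name is illustrative:

    import requests

    def verify_doi(doi: str) -> bool:
        """Check whether a DOI resolves to a real record in the Crossref index."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        if resp.status_code != 200:
            return False  # unknown DOI: the reference may be hallucinated
        titles = resp.json()["message"].get("title", [])
        print("Found:", titles[0] if titles else "(no title)")
        return True

    # Usage: compare the record's title and authors with the AI's citation.
    verify_doi("10.1038/nature14539")  # a real DOI; fabricated ones return False

A resolving DOI is necessary but not sufficient: the cited work must also actually support the claim it is attached to.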

Click here for an example of this application (in Hungarian).

Program code generation

Several AI-based tools exist for generating program code.

  • Through a directive approach (e.g., ChatGPT), full source code and algorithms can be generated. Their use must be explicitly authorized by the course instructor; generated code must be marked (see the sketch after this list) and listed in the declaration.
  • Assistive, code-prediction-based programming assistants can be used freely, without citation or marking.
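
The guidelines do not prescribe a marking format. One possible convention (an illustrative assumption, not a Faculty requirement) is to delimit the generated region with comments naming the tool, model, and query date, mirroring the citation fields listed later, so the same section can also be identified in the declaration:

    # --- BEGIN AI-GENERATED (ChatGPT, GPT-4o, queried 2025-02-10) ---
    # Prompt summary: "binary search over a sorted list of integers"
    def binary_search(items: list[int], target: int) -> int:
        """Return the index of `target` in sorted `items`, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1
    # --- END AI-GENERATED ---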

Click here for an example of this application (in Hungarian).

Creating new ideas and solution proposals

Generative AI models can also be used for inspiration and for gathering ideas and proposals. It must always be checked whether a generated idea is truly new; it is likely connected to existing work, which must be cited appropriately. It is important to remember that the model’s output is not a fully reliable source of information: it may be incorrect or incomplete, since the model’s knowledge is limited by the databases used for training and querying, and it may also be outdated. No citation is required in the text, but AI use must be indicated in the declaration (if required by the course).

Click here for an example of this application (in Hungarian).

Creating an outline (text structure, bullet points)

A common way to use generative AI tools is to create the structure of documentation or presentations: bullet points or main ideas. This can be a significant help in writing documents, but besides correctness, the level of detail also needs to be checked. The typical 5–7–10 points generated by models may be appropriate, but may also be too many or too few. Examine which points are essential for the task; there is no need to stick rigidly to the model’s output. No citation is required in the text, but the use must be indicated in the declaration (if required by the course).

Click here for an example of this application (in Hungarian).

Creating text blocks

For creating text blocks, the principles described above apply: if the output contains new information beyond what was included in the prompt, it requires citation. Otherwise, citation is not needed, but the use of AI for generating text blocks must be indicated in the declaration (if required by the course).

Click here for an example of this application (in Hungarian).

Generating images for illustration

For AI-generated images, it must always be indicated that the image was generated and by which application (following the citation rules detailed below). If the image generation is based on an existing photo, recording, or artwork, that must also be cited separately.

Click here for an example of this application (in Hungarian).

Generating data visualizations, graphs based on data points

For graphs generated by generative AI, citation is mandatory, and their use must also be indicated in the declaration with identification of the relevant document sections.

Click here for an example of this application (in Hungarian).

Creating presentations

For AI-generated presentations, the same rules apply as described for generating text, images, and visualizations. The correctness of the content must be verified by the user.

Click here for an example of this application (in Hungarian).

For instructors

The guided use of generative AI can assist instructors in creating teaching materials (information gathering, summarizing, preparing slide outlines, generating exercises and examples, etc.). If the use of generative AI was assistive in nature and the materials as a whole reflect the instructor’s intentions and goals, this is considered ethical use even if the instructor does not indicate the use of the model in the produced content. Ensuring the factual and professional correctness of teaching materials remains the instructor’s responsibility.

It is also considered ethical use for instructors to use generative AI to evaluate student assignments and other assessments (as detailed below), but this must be communicated to the students.

Rethinking course objectives

The value of lexical knowledge has decreased since the spread of the internet, and with the emergence of generative AI, it has practically disappeared. During assessments, instead of (or in addition to) lexical knowledge, it is advisable to focus on understanding course interconnections and engineering problem-solving. This has always been a guiding principle, but in the future, it must be emphasized even more.

Rethinking course assessments

  • Any assessment that generative AI can fully solve needs reconsideration.
  • Evaluation should shift from outcome-centered to process-centered. If the number of students and the nature of the course allow, it is recommended to collect weekly progress from students (possibly introducing version control such as git; see the sketch after this list) and review it during evaluation. Linear, error-free progress is unrealistic in engineering fields and may indicate that the solution is not the student’s own work.
  • Where the number of students and instructor capacity allows, it is advisable to ask supplementary questions about the solution process (which part was hardest, which lecture material was used, what was learned, etc.).
  • If the course allows, assignments may also explicitly require the use of generative AI (e.g., developing a topic using generative AI and critically evaluating the model’s answers).
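
As an illustration of process-centered review, the sketch below summarizes a repository’s commits per ISO week using standard git commands. It is one possible workflow, not a prescribed tool; a long gap followed by a single huge commit is the kind of pattern that merits a supplementary question:

    import subprocess
    from collections import Counter

    def weekly_commit_counts(repo_path: str) -> Counter:
        """Count commits per ISO week in a local git repository."""
        log = subprocess.run(
            ["git", "-C", repo_path, "log",
             "--pretty=format:%ad", "--date=format:%G-W%V"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(log.splitlines())

    # Print the weekly activity profile of the repository in the current directory.
    for week, n in sorted(weekly_commit_counts(".").items()):
        print(f"{week}: {n} commit(s)")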

Evaluating assessments

In some courses, generative AI can assist in evaluating assessments. This is considered ethical use if the following principles are respected:

  • When evaluating a student’s assignment with AI, personal data (name, Neptun code, etc.) may only be included in the prompt if the model runs locally or is operated by a service provider deemed acceptable by the university from a data protection perspective. (For online services, there is no guarantee that companies do not use the data for model refinement.) A sketch of calling a locally run model follows this list.
  • The model’s result must always be reviewed by a human, and corrections made if necessary.
  • In case of student complaints, a generative AI model’s response cannot be used as a reference.
  • The responsibility for evaluation remains with the instructor. This is especially important when an AI-generated thesis/dissertation review fails to point out major professional errors and deficiencies.
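
To illustrate the data-protection point above, here is a minimal sketch of drafting an evaluation with a locally hosted model, so that no student data leaves the machine. It assumes an Ollama server running on its default port; the model name and rubric are illustrative assumptions, and the draft must still be reviewed by a human:

    import requests

    RUBRIC = "Grade the following solution from 0 to 10 and justify briefly:\n\n"

    def local_pregrade(anonymized_solution: str) -> str:
        """Draft an evaluation with a locally running model; the result is
        only a starting point and must be reviewed by the instructor."""
        resp = requests.post(
            "http://localhost:11434/api/generate",  # default local Ollama endpoint
            json={"model": "llama3",
                  "prompt": RUBRIC + anonymized_solution,
                  "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]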

An assignment must be checked not only for professional correctness; it must also be verified whether the student used generative AI (if its use was not allowed in the course):

  • Various online tools exist to determine whether a document was generated by AI. Since their reliability is low, they cannot be used as grounds for rejecting assignments.
  • If the instructor has doubts, the origin of the assignment can be checked orally. This option has always existed, but students should be informed of it by the course coordinator or supervisor.

Thesis and dissertation submission

In the case of a thesis/dissertation, it is the supervisor’s responsibility to check the origin of the submitted work. This includes checking for 1) possible plagiarism and 2) the extent of AI use. The student’s completed declaration provides a reference for the latter. The supervisor must decide whether the submitted work contains sufficient results that can clearly be attributed to the student, allowing the awarding of the degree. No clear guidance can be given here, as the acceptability of the same level of AI use depends on the topic:

  • If the task was to develop software and most of it was produced with generative AI, this is not acceptable.
  • If the task was to develop an algorithm, procedure, or comparative analysis, and software supporting it was produced with AI assistance, this counts as acceptable AI use. This is exactly what AI was designed for: making engineering work more efficient and enabling more complex tasks to be solved by automating the mechanical, less challenging but time-consuming parts.

If the supervisor considers the use of generative AI disproportionately large, or affecting essential parts of the assignment, the submission will not be approved.

The reviewer does not need to evaluate the AI use when assessing the thesis/dissertation.

For students

Generative AI can greatly help students in acquiring professional knowledge, summarizing what they have learned, and filling gaps in their knowledge. These all count as ethical uses of generative AI. Model responses can never serve as references (e.g., in student complaints); verifying their accuracy is, of course, the student’s responsibility.

If students are unsure whether using generative AI for a particular purpose is allowed in an assignment, they should always consult the course coordinator or supervisor.

If the course syllabus requires it, students must declare the use of generative AI in connection with assignments, and the declaration must cover all uses, however minor. For theses and dissertations, the declaration is always required, and the supervisor cannot waive it. (An example of the declaration is available here.)

A generative AI’s (professional) answer may appear in the assignment, but it must be properly cited or indicated in the declaration. If it is established that generative AI was used in an unauthorized way, it counts as plagiarism and carries the same consequences.

Citing content created with generative AI

Including AI-generated content in assignments is allowed unless explicitly prohibited by the course coordinator. Such content must always be indicated.

Citing information and conclusions from AI responses

If text produced by generative AI contains factual information or conclusions not included in the prompt, it must be cited like any other literature source. There are several citation guidelines; the Faculty requires following the MLA style, which treats generated content as a source without an author. The corresponding reference must be cited next to the information or conclusion, and the bibliography entry must include:

  • A brief, one-sentence summary of the prompt
  • The model name with version number
  • The company producing the model
  • The query date
  • If possible, a shared link to the conversation

Example:

[1] “Examples of harm-reduction initiatives” prompt. ChatGPT-4o, OpenAI, February 10, 2025.

Documenting AI use

If required by the course syllabus, assignments must include an appendix documenting AI use. This declaration includes a table specifying which AI tools were used for what purpose, and which parts of the document were affected to what extent. The declaration must be signed by the student. (An example of the declaration is available here.)

AI use must also be indicated by instructors if generated content includes information, statements, or opinions produced by the model (i.e., not part of the prompt). There is no template for this declaration; instructors must find a way to indicate AI use (e.g., in footnotes or embedded in the text).

Collection of generative AI tools

General-purpose generative AI tools

The following table is, of course, not a complete list, but contains the most important tools:

Service | Company | Availability
ChatGPT | OpenAI (USA) | https://chatgpt.com
Gemini | Google (USA) | https://gemini.google.com
Claude | Anthropic (USA) | https://claude.ai
Copilot | Microsoft (USA) | https://copilot.microsoft.com
Mistral | Mistral AI (France) | https://chat.mistral.ai/chat
DeepSeek | DeepSeek (China) | https://www.deepseek.com

These are excellent for content creation, code generation, learning, and summarizing text. They sometimes provide incorrect or outdated information and may generate overly lengthy answers. Some tools are multimodal, capable of processing images and text simultaneously.

All BME instructors and students have access to Microsoft 365 Copilot services under the BME Microsoft Campus License. To use them, log in with a vik.bme.hu or edu.bme.hu email address.

Tools for image generation

Service | Company | Availability
DALL·E | OpenAI | https://openai.com/index/dall-e-3
Midjourney | Midjourney | https://www.midjourney.com
Stable Diffusion | Stability AI | https://stability.ai/stable-image

(Most of these require registration and/or a subscription.)

Tools for code generation

Most general-purpose LLMs perform well in coding tasks. The following (not complete) list includes specialized coding assistants that integrate into development environments:

Service | Availability
GitHub Copilot | https://github.com/features/copilot
JetBrains AI Assistant | https://www.jetbrains.com/ai
Amazon Q Developer | https://aws.amazon.com/q/developer
Tabnine | https://www.tabnine.com

It is important to know that the Pro tier of GitHub Copilot is free for university students and instructors. Details on access can be found on the service’s website.