- Document a few more properties and methods
- Add more information to many classes
- Fix lots of typos and grammar mistakes
- Use [code] tags for parameters consistently
- Use [b] and [i] tags consistently
- Put "Warning:" and "Note:" on their own line to be more visible,
and make them always bold
- Tweak formatting in code examples to be more readable
- Use double quotes consistently
- Add more links to third-party technologies
The ARVR Anchor point is a spatial node that maps a real world location identified by the AR platform to a position within the game world. For example, as long as plane detection in ARKit is on, ARKit will identify and update the position of planes (tables, floors, etc.) and create anchors for them.
This node is mapped to one of the anchors through its unique id. When you receive a signal that a new anchor is available, you should add this node to your scene for that anchor. You can predefine nodes and set the id and the nodes will simply remain on 0,0,0 until a plane is recognized.
This node is mapped to one of the anchors through its unique ID. When you receive a signal that a new anchor is available, you should add this node to your scene for that anchor. You can predefine nodes and set the ID; the nodes will simply remain on 0,0,0 until a plane is recognized.
Keep in mind that, as long as plane detection is enabled, the size, placement and orientation of an anchor will be updated as the detection logic learns more about the real world, especially if only part of the surface is in view.
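For example, a minimal sketch of predefining an anchor, assuming this script is attached to the [ARVROrigin] node:
[codeblock]
func _ready():
    var anchor = ARVRAnchor.new()
    anchor.anchor_id = 1 # Stays at (0, 0, 0) until the AR platform recognizes a plane.
    add_child(anchor)
[/codeblock]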
</description>
<tutorials>
@ -22,14 +22,14 @@
<returntype="bool">
</return>
<description>
Returns [code]true[/code] if the anchor is being tracked and [code]false[/code] if no anchor with this id is currently known.
Returns [code]true[/code] if the anchor is being tracked and [code]false[/code] if no anchor with this ID is currently known.
</description>
</method>
<methodname="get_mesh"qualifiers="const">
<returntype="Mesh">
</return>
<description>
If provided by the ARVR Interface this returns a mesh object for the anchor. For an anchor this can be a shape related to the object being tracked or it can be a mesh that provides topology related to the anchor and can be used to create shadows/reflections on surfaces or for generating collision shapes.
If provided by the ARVR Interface, this returns a mesh object for the anchor. For an anchor, this can be a shape related to the object being tracked or it can be a mesh that provides topology related to the anchor and can be used to create shadows/reflections on surfaces or for generating collision shapes.
The anchor's id. You can set this before the anchor itself exists. The first anchor gets an id of [code]1[/code], the second an id of [code]2[/code], etc. When anchors get removed, the engine can then assign the corresponding id to new anchors. The most common situation where anchors 'disappear' is when the AR server identifies that two anchors represent different parts of the same plane and merges them.
The anchor's ID. You can set this before the anchor itself exists. The first anchor gets an ID of [code]1[/code], the second an ID of [code]2[/code], etc. When anchors get removed, the engine can then assign the corresponding ID to new anchors. The most common situation where anchors "disappear" is when the AR server identifies that two anchors represent different parts of the same plane and merges them.
</member>
</members>
<signals>
@ -57,7 +57,7 @@
<argumentindex="0"name="mesh"type="Mesh">
</argument>
<description>
Emitted when the mesh associated with the anchor changes or when one becomes available. This is especially important for topology that is constantly being mesh_updated.
Emitted when the mesh associated with the anchor changes or when one becomes available. This is especially important for topology that is constantly being [code]mesh_updated[/code].
A spatial node representing a spatiallytracked controller.
A spatial node representing a spatially-tracked controller.
</brief_description>
<description>
This is a helper spatial node that is linked to the tracking of controllers. It also offers several handy passthroughs to the state of buttons and such on the controllers.
Controllers are linked by their id. You can create controller nodes before the controllers are available. Say your game always uses two controllers (one for each hand) you can predefine the controllers with id 1 and 2 and they will become active as soon as the controllers are identified. If you expect additional controllers to be used, you should react to the signals and add ARVRController nodes to your scene.
The position of the controller node is automatically updated by the ARVRServer. This makes this node ideal to add child nodes to visualise the controller.
Controllers are linked by their ID. You can create controller nodes before the controllers are available. If your game always uses two controllers (one for each hand), you can predefine the controllers with ID 1 and 2; they will become active as soon as the controllers are identified. If you expect additional controllers to be used, you should react to the signals and add ARVRController nodes to your scene.
The position of the controller node is automatically updated by the [ARVRServer]. This makes this node ideal to add child nodes to visualize the controller.
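For instance, a minimal sketch of predefining both hand controllers, assuming this script is attached to the [ARVROrigin] node:
[codeblock]
func _ready():
    for id in [1, 2]: # 1 is reserved for the left hand, 2 for the right hand.
        var controller = ARVRController.new()
        controller.controller_id = id
        add_child(controller)
[/codeblock]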
Returns the hand holding this controller, if known. See TRACKER_* constants in [ARVRPositionalTracker].
Returns the hand holding this controller, if known. See [code]TRACKER_*[/code] constants in [ARVRPositionalTracker].
</description>
</method>
<methodname="get_is_active"qualifiers="const">
@ -45,14 +45,14 @@
<returntype="int">
</return>
<description>
Returns the ID of the joystick object bound to this. Every controller tracked by the ARVRServer that has buttons and axis will also be registered as a joystick within Godot. This means that all the normal joystick tracking and input mapping will work for buttons and axis found on the AR/VR controllers. This ID is purely offered as information so you can link up the controller with its joystick entry.
Returns the ID of the joystick object bound to this. Every controller tracked by the [ARVRServer] that has buttons and axes will also be registered as a joystick within Godot. This means that all the normal joystick tracking and input mapping will work for buttons and axes found on the AR/VR controllers. This ID is purely offered as information so you can link up the controller with its joystick entry.
</description>
</method>
<methodname="get_mesh"qualifiers="const">
<returntype="Mesh">
</return>
<description>
If provided by the ARVRInterface this returns a mesh associated with the controller. This can be used to visualise the controller.
If provided by the [ARVRInterface], this returns a mesh associated with the controller. This can be used to visualize the controller.
A controller id of 0 is unbound and will always result in an inactive node. Controller id 1 is reserved for the first controller that identifies itself as the left hand controller and id 2 is reserved for the first controller that identifies itself as the right hand controller.
For any other controller that the [ARVRServer] detects, we continue with controller id 3.
When a controller is turned off, its slot is freed. This ensures controllers will keep the same id even when controllers with lower ids are turned off.
The controller's ID.
A controller ID of 0 is unbound and will always result in an inactive node. Controller ID 1 is reserved for the first controller that identifies itself as the left-hand controller and ID 2 is reserved for the first controller that identifies itself as the right-hand controller.
For any other controller that the [ARVRServer] detects, we continue with controller ID 3.
When a controller is turned off, its slot is freed. This ensures controllers will keep the same ID even when controllers with lower IDs are turned off.
The degree to which the tracker rumbles. Ranges from [code]0.0[/code] to [code]1.0[/code] with precision [code].01[/code]. If changed, updates [member ARVRPositionalTracker.rumble] accordingly.
This class needs to be implemented to make an AR or VR platform available to Godot and these should be implemented as C++ modules or GDNative modules (note that for GDNative the subclass ARVRScriptInterface should be used). Part of the interface is exposed to GDScript so you can detect, enable and configure an AR or VR platform.
Interfaces should be written in such a way that simply enabling them will give us a working setup. You can query the available interfaces through ARVRServer.
Interfaces should be written in such a way that simply enabling them will give us a working setup. You can query the available interfaces through [ARVRServer].
</description>
<tutorials>
</tutorials>
@ -14,7 +14,7 @@
<returntype="int">
</return>
<description>
If this is an AR interface that requires displaying a camera feed as the background, this method returns the feed id in the [CameraServer] for this interface.
If this is an AR interface that requires displaying a camera feed as the background, this method returns the feed ID in the [CameraServer] for this interface.
</description>
</method>
<methodname="get_capabilities"qualifiers="const">
@ -51,9 +51,9 @@
<description>
Call this to initialize this interface. The first interface that is initialized is identified as the primary interface and it will be used for rendering output.
After initializing the interface you want to use, you then need to enable the AR/VR mode of a viewport and rendering should commence.
Note that you must enable the AR/VR mode on the main viewport for any device that uses the main output of Godot such as for mobile VR.
If you do this for a platform that handles its own output (such as OpenVR) Godot will show just one eye without distortion on screen. Alternatively you can add a separate viewport node to your scene and enable AR/VR on that viewport and it will be used to output to the HMD leaving you free to do anything you like in the main window such as using a separate camera as a spectator camera or render out something completely different.
While currently not used you can activate additional interfaces, you may wish to do this if you want to track controllers from other platforms. However at this point in time only one interface can render to an HMD.
[b]Note:[/b] You must enable the AR/VR mode on the main viewport for any device that uses the main output of Godot such as for mobile VR.
If you do this for a platform that handles its own output (such as OpenVR), Godot will show just one eye without distortion on screen. Alternatively, you can add a separate viewport node to your scene and enable AR/VR on that viewport. It will then be used to output to the HMD, leaving you free to do anything you like in the main window, such as using a separate camera as a spectator camera or rendering something completely different.
While currently not used, you can activate additional interfaces. You may wish to do this if you want to track controllers from other platforms. However, at this point in time, only one interface can render to an HMD.
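For example, a minimal sketch of initializing an interface and enabling AR/VR output on the main viewport (the interface name passed to [method ARVRServer.find_interface] depends on the platform and is only an assumption here):
[codeblock]
var interface = ARVRServer.find_interface("Native mobile")
if interface and interface.initialize():
    get_viewport().arvr = true
[/codeblock]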
</description>
</method>
<methodname="is_stereo">
@ -96,7 +96,7 @@
This interface supports AR (video background and real world tracking).
This interface outputs to an external device, if the main viewport is used the on screen output is an unmodified buffer of either the left or right eye (stretched if the viewport size is not changed to the same aspect ratio of get_render_targetsize. Using a separate viewport node frees up the main viewport for other purposes.
This interface outputs to an external device. If the main viewport is used, the on-screen output is an unmodified buffer of either the left or right eye (stretched if the viewport size is not changed to the same aspect ratio of [method get_render_targetsize]). Using a separate viewport node frees up the main viewport for other purposes.
</constant>
<constantname="EYE_MONO"value="0"enum="Eyes">
Mono output. This is mostly used internally when retrieving positioning information for our camera node or when stereoscopic rendering is not supported.
This is a special node within the AR/VR system that maps the physical location of the center of our tracking space to the virtual location within our game world.
There should be only one of these nodes in your scene and you must have one. All the ARVRCamera, ARVRController and ARVRAnchor nodes should be direct children of this node for spatial tracking to work correctly.
It is the position of this node that you update when your character needs to move through your game world while we're not moving in the real world. Movement in the real world is always in relation to this origin point.
So say that your character is driving a car, the ARVROrigin node should be a child node of this car. If you implement a teleport system to move your character, you change the position of this node. Etc.
For example, if your character is driving a car, the ARVROrigin node should be a child node of this car. Or, if you're implementing a teleport system to move your character, you should change the position of this node.
An instance of this object represents a device that is tracked such as a controller or anchor point. HMDs aren't represented here as they are fully handled internally.
As controllers are turned on and the AR/VR interface detects them instances of this object are automatically added to this list of active tracking objects accessible through the ARVRServer
The ARVRController and ARVRAnchor both consume objects of this type and should be the objects you use in game. The positional trackers are just the under the hood objects that make this all work and are mostly exposed so GDNative based interfaces can interact with them.
An instance of this object represents a device that is tracked, such as a controller or anchor point. HMDs aren't represented here as they are handled internally.
As controllers are turned on and the AR/VR interface detects them, instances of this object are automatically added to this list of active tracking objects accessible through the [ARVRServer].
The [ARVRController] and [ARVRAnchor] both consume objects of this type and should be used in your project. The positional trackers are just under-the-hood objects that make this all work. These are mostly exposed so that GDNative-based interfaces can interact with them.
The AR/VR Server is the heart of our AR/VR solution and handles all the processing.
The AR/VR server is the heart of our AR/VR solution and handles all the processing.
</description>
<tutorials>
</tutorials>
@ -17,12 +17,12 @@
<argumentindex="1"name="keep_height"type="bool">
</argument>
<description>
This is a really important function to understand correctly. AR and VR platforms all handle positioning slightly differently.
For platforms that do not offer spatial tracking our origin point (0,0,0) is the location of our HMD but you have little control over the direction the player is facing in the real world.
For platforms that do offer spatial tracking our origin point depends very much on the system. For OpenVR our origin point is usually the center of the tracking space, on the ground. For other platforms its often the location of the tracking camera.
This method allows you to center our tracker on the location of the HMD, it will take the current location of the HMD and use that to adjust all our tracking data in essence realigning the real world to your players current position in your game world.
For this method to produce usable results tracking information should be available and this often takes a few frames after starting your game.
You should call this method after a few seconds have passed, when the user requests a realignment of the display holding a designated button on a controller for a short period of time, and when implementing a teleport mechanism.
This is an important function to understand correctly. AR and VR platforms all handle positioning slightly differently.
For platforms that do not offer spatial tracking, our origin point (0,0,0) is the location of our HMD, but you have little control over the direction the player is facing in the real world.
For platforms that do offer spatial tracking, our origin point depends very much on the system. For OpenVR, our origin point is usually the center of the tracking space, on the ground. For other platforms, it's often the location of the tracking camera.
This method allows you to center your tracker on the location of the HMD. It will take the current location of the HMD and use that to adjust all your tracking data; in essence, realigning the real world to your player's current position in the game world.
For this method to produce usable results, tracking information must be available. This often takes a few frames after starting your game.
You should call this method after a few seconds have passed. For instance, when the user requests a realignment of the display by holding a designated button on a controller for a short period of time, or when implementing a teleport mechanism.
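For instance, a minimal sketch of recentering while keeping the player's height (the rotation mode used here is only an assumption):
[codeblock]
func _recenter():
    ARVRServer.center_on_hmd(ARVRServer.RESET_BUT_KEEP_TILT, true)
[/codeblock]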
</description>
</method>
<methodname="find_interface"qualifiers="const">
@ -31,7 +31,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Find an interface by its name. Say that you're making a game that uses specific capabilities of an AR/VR platform you can find the interface for that platform by name and initialize it.
Finds an interface by its name. For instance, if your project uses capabilities of an AR/VR platform, you can find the interface for that platform by name and initialize it.
</description>
</method>
<methodname="get_hmd_transform">
@ -47,21 +47,21 @@
<argumentindex="0"name="idx"type="int">
</argument>
<description>
Get the interface registered at a given index in our list of interfaces.
Gets the interface registered at a given index in our list of interfaces.
Get the number of interfaces currently registered with the AR/VR server. If your game supports multiple AR/VR platforms, you can look through the available interface, and either present the user with a selection or simply try an initialize each interface and use the first one that returns [code]true[/code].
Gets the number of interfaces currently registered with the AR/VR server. If your project supports multiple AR/VR platforms, you can look through the available interfaces, and either present the user with a selection or simply try to initialize each interface and use the first one that returns [code]true[/code].
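A minimal sketch of such a loop:
[codeblock]
for i in ARVRServer.get_interface_count():
    var interface = ARVRServer.get_interface(i)
    if interface.initialize():
        get_viewport().arvr = true
        break
[/codeblock]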
</description>
</method>
<methodname="get_interfaces"qualifiers="const">
<returntype="Array">
</return>
<description>
Returns a list of available interfaces with both id and name of the interface.
Returns a list of available interfaces, with the ID and name of each interface.
</description>
</method>
<methodname="get_last_commit_usec">
@ -86,7 +86,7 @@
<returntype="Transform">
</return>
<description>
Gets our reference frame transform, mostly used internally and exposed for GDNative build interfaces.
Gets the reference frame transform. Mostly used internally and exposed for GDNative-based interfaces.
Signal send when a new tracker has been added. If you don't use a fixed number of controllers or if you're using ARVRAnchors for an AR solution it is important to react to this signal and add the appropriate ARVRController or ARVRAnchor node related to this new tracker.
Emitted when a new tracker has been added. If you don't use a fixed number of controllers or if you're using [ARVRAnchor]s for an AR solution, it is important to react to this signal to add the appropriate [ARVRController] or [ARVRAnchor] nodes related to this new tracker.
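For example, a minimal sketch of reacting to new anchors, assuming this script is attached to the [ARVROrigin] node:
[codeblock]
func _ready():
    ARVRServer.connect("tracker_added", self, "_on_tracker_added")

func _on_tracker_added(tracker_name, type, id):
    if type == ARVRServer.TRACKER_ANCHOR:
        var anchor = ARVRAnchor.new()
        anchor.anchor_id = id
        add_child(anchor)
[/codeblock]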
</description>
</signal>
<signalname="tracker_removed">
@ -147,19 +147,19 @@
<argumentindex="2"name="id"type="int">
</argument>
<description>
Signal send when a tracker is removed, you should remove any ARVRController or ARVRAnchor points if applicable. This is not mandatory, the nodes simply become inactive and will be made active again when a new tracker becomes available (i.e. a new controller is switched on that takes the place of the previous one).
Emitted when a tracker is removed. You should remove any [ARVRController] or [ARVRAnchor] points if applicable. This is not mandatory; the nodes simply become inactive and will be made active again when a new tracker becomes available (i.e. a new controller is switched on that takes the place of the previous one).
var res = as.get_closest_position_in_segment(Vector3(3, 3, 0)) # returns (0, 3, 0)
var res = as.get_closest_position_in_segment(Vector3(3, 3, 0)) # Returns (0, 3, 0)
[/codeblock]
The result is in the segment that goes from [code]y = 0[/code] to [code]y = 5[/code]. It's the closest position in the segment to the given point.
</description>
@ -139,11 +139,11 @@
<argumentindex="1"name="to_id"type="int">
</argument>
<description>
Returns an array with the ids of the points that form the path found by AStar between the given points. The array is ordered from the starting point to the ending point of the path.
Returns an array with the IDs of the points that form the path found by AStar between the given points. The array is ordered from the starting point to the ending point of the path.
[codeblock]
var as = AStar.new()
as.add_point(1, Vector3(0, 0, 0))
as.add_point(2, Vector3(0, 1, 0), 1) # default weight is 1
as.add_point(2, Vector3(0, 1, 0), 1) # Default weight is 1
as.add_point(3, Vector3(1, 1, 0))
as.add_point(4, Vector3(2, 0, 0))
@ -153,7 +153,7 @@
as.connect_points(1, 4, false)
as.connect_points(5, 4, false)
var res = as.get_id_path(1, 3) # returns [1, 2, 3]
var res = as.get_id_path(1, 3) # Returns [1, 2, 3]
[/codeblock]
If you change the 2nd point's weight to 3, then the result will be [code][1, 4, 3][/code] instead, because now even though the distance is longer, it's "easier" to get through point 4 than through point 2.
</description>
@ -164,7 +164,7 @@
<argumentindex="0"name="id"type="int">
</argument>
<description>
Returns an array with the ids of the points that form the connect with the given point.
Returns an array with the IDs of the points that form the connection with the given point.
[codeblock]
var as = AStar.new()
as.add_point(1, Vector3(0, 0, 0))
@ -175,7 +175,7 @@
as.connect_points(1, 2, true)
as.connect_points(1, 3, true)
var neighbors = as.get_point_connections(1) # returns [2, 3]
var neighbors = as.get_point_connections(1) # Returns [2, 3]
[/codeblock]
</description>
</method>
@ -196,7 +196,7 @@
<argumentindex="0"name="id"type="int">
</argument>
<description>
Returns the position of the point associated with the given id.
Returns the position of the point associated with the given [code]id[/code].
var res = as.get_closest_position_in_segment(Vector2(3, 3)) # returns (0, 3)
var res = as.get_closest_position_in_segment(Vector2(3, 3)) # Returns (0, 3)
[/codeblock]
The result is in the segment that goes from [code]y = 0[/code] to [code]y = 5[/code]. It's the closest position in the segment to the given point.
</description>
@ -116,11 +116,11 @@
<argumentindex="1"name="to_id"type="int">
</argument>
<description>
Returns an array with the ids of the points that form the path found by AStar2D between the given points. The array is ordered from the starting point to the ending point of the path.
Returns an array with the IDs of the points that form the path found by AStar2D between the given points. The array is ordered from the starting point to the ending point of the path.
[codeblock]
var as = AStar2D.new()
as.add_point(1, Vector2(0, 0))
as.add_point(2, Vector2(0, 1), 1) # default weight is 1
as.add_point(2, Vector2(0, 1), 1) # Default weight is 1
as.add_point(3, Vector2(1, 1))
as.add_point(4, Vector2(2, 0))
@ -130,7 +130,7 @@
as.connect_points(1, 4, false)
as.connect_points(5, 4, false)
var res = as.get_id_path(1, 3) # returns [1, 2, 3]
var res = as.get_id_path(1, 3) # Returns [1, 2, 3]
[/codeblock]
If you change the 2nd point's weight to 3, then the result will be [code][1, 4, 3][/code] instead, because now even though the distance is longer, it's "easier" to get through point 4 than through point 2.
</description>
@ -141,7 +141,7 @@
<argumentindex="0"name="id"type="int">
</argument>
<description>
Returns an array with the ids of the points that form the connect with the given point.
Returns an array with the IDs of the points that form the connection with the given point.
[codeblock]
var as = AStar2D.new()
as.add_point(1, Vector2(0, 0))
@ -152,7 +152,7 @@
as.connect_points(1, 2, true)
as.connect_points(1, 3, true)
var neighbors = as.get_point_connections(1) # returns [2, 3]
var neighbors = as.get_point_connections(1) # Returns [2, 3]
[/codeblock]
</description>
</method>
@ -173,7 +173,7 @@
<argumentindex="0"name="id"type="int">
</argument>
<description>
Returns the position of the point associated with the given id.
Returns the position of the point associated with the given [code]id[/code].
Adds a button with label [i]text[/i] and a custom [i]action[/i] to the dialog and returns the created button. [i]action[/i] will be passed to the [signal custom_action] signal when pressed.
If [code]true[/code], [i]right[/i] will place the button to the right of any sibling buttons. Default value: [code]false[/code].
Adds a button with label [code]text[/code] and a custom [code]action[/code] to the dialog and returns the created button. [code]action[/code] will be passed to the [signal custom_action] signal when pressed.
If [code]true[/code], [code]right[/code] will place the button to the right of any sibling buttons. Default value: [code]false[/code].
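For instance, a minimal sketch of using [code]add_button[/code] (the dialog node path, button label and action name are arbitrary):
[codeblock]
func _ready():
    var apply_button = $AcceptDialog.add_button("Apply", true, "apply")
    $AcceptDialog.connect("custom_action", self, "_on_custom_action")

func _on_custom_action(action):
    if action == "apply":
        print("Apply pressed")
[/codeblock]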
</description>
</method>
<methodname="add_cancel">
@ -29,7 +29,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Adds a button with label [i]name[/i] and a cancel action to the dialog and returns the created button.
Adds a button with label [code]name[/code] and a cancel action to the dialog and returns the created button.
If [code]true[/code], the dialog is hidden when the OK button is pressed. You can set it to [code]false[/code] if you want to do e.g. input validation when receiving the [signal confirmed] signal, and handle hiding the dialog in your own logic. Default value: [code]true[/code].
Note: Some nodes derived from this class can have a different default value, and potentially their own built-in logic overriding this setting. For example [FileDialog] defaults to [code]false[/code], and has its own input validation code that is called when you press OK, which eventually hides the dialog if the input is valid. As such this property can't be used in [FileDialog] to disable hiding the dialog when pressing OK.
[b]Note:[/b] Some nodes derived from this class can have a different default value, and potentially their own built-in logic overriding this setting. For example [FileDialog] defaults to [code]false[/code], and has its own input validation code that is called when you press OK, which eventually hides the dialog if the input is valid. As such, this property can't be used in [FileDialog] to disable hiding the dialog when pressing OK.
Play the animation set in parameter. If no parameter is provided, the current animation is played. Property [code]backwards[/code] plays the animation in reverse if set to [code]true[/code].
Plays the animation named [code]anim[/code]. If no [code]anim[/code] is provided, the current animation is played. If [code]backwards[/code] is [code]true[/code], the animation will be played in reverse.
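For example, a minimal sketch, assuming the sprite's frames contain an animation named "run":
[codeblock]
$AnimatedSprite.play("run")
$AnimatedSprite.play("run", true) # Plays "run" in reverse.
[/codeblock]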
</description>
</method>
<methodname="stop">
<returntype="void">
</return>
<description>
Stop the current animation (does not reset the frame counter).
Stops the current animation (does not reset the frame counter).
Set the interpolation type of a given track, from the INTERPOLATION_* enum.
Sets the interpolation type of a given track.
</description>
</method>
<methodname="track_set_key_time">
@ -546,7 +546,7 @@
<argumentindex="2"name="time"type="float">
</argument>
<description>
Set the time of an existing key.
Sets the time of an existing key.
</description>
</method>
<methodname="track_set_key_transition">
@ -559,7 +559,7 @@
<argumentindex="2"name="transition"type="float">
</argument>
<description>
Set the transition curve (easing) for a specific key (see built-in math function "ease").
Sets the transition curve (easing) for a specific key (see the built-in math function [method @GDScript.ease]).
</description>
</method>
<methodname="track_set_key_value">
@ -572,7 +572,7 @@
<argumentindex="2"name="value"type="Variant">
</argument>
<description>
Set the value of an existing key.
Sets the value of an existing key.
</description>
</method>
<methodname="track_set_path">
@ -583,8 +583,8 @@
<argumentindex="1"name="path"type="NodePath">
</argument>
<description>
Set the path of a track. Paths must be valid scene-tree paths to a node, and must be specified starting from the parent node of the node that will reproduce the animation. Tracks that control properties or bones must append their name after the path, separated by ":".
[b]Example:[/b] "character/skeleton:ankle" or "character/mesh:transform/local".
Sets the path of a track. Paths must be valid scene-tree paths to a node, and must be specified starting from the parent node of the node that will reproduce the animation. Tracks that control properties or bones must append their name after the path, separated by [code]":"[/code].
For example, [code]"character/skeleton:ankle"[/code] or [code]"character/mesh:transform/local"[/code].
The total length of the animation (in seconds). Note that length is not delimited by the last key, as this one may be before or after the end to ensure correct interpolation and looping.
The total length of the animation (in seconds).
[b]Note:[/b] Length is not delimited by the last key, as this one may be before or after the end to ensure correct interpolation and looping.
A flag indicating that the animation must loop. This is used for correct interpolation of animation cycles, and for hinting the player that it must restart the animation.
@ -682,7 +683,7 @@
Value tracks set values in node properties, but only those which can be interpolated.
Get the property information for parameter. Parameters are custom local memory used for your nodes, given a resource can be reused in multiple trees. Format is similar to [method Object.get_property_list].
Gets the property information for this parameter. Parameters are custom local memory used for your nodes, given that a resource can be reused in multiple trees. The format is similar to [method Object.get_property_list].
</description>
</method>
<methodname="has_filter"qualifiers="virtual">
@ -176,7 +176,7 @@
<argumentindex="0"name="index"type="int">
</argument>
<description>
Remove an input, call this only when inactive.
Removes an input. Call this only when inactive.
</description>
</method>
<methodname="set_filter_path">
@ -187,7 +187,7 @@
<argumentindex="1"name="enable"type="bool">
</argument>
<description>
Add/Remove a path for the filter.
Adds or removes a path for the filter.
</description>
</method>
<methodname="set_parameter">
@ -198,7 +198,7 @@
<argumentindex="1"name="value"type="Variant">
</argument>
<description>
Set a custom parameter. These are used as local storage, because resources can be reused across the tree or scenes.
Sets a custom parameter. These are used as local storage, because resources can be reused across the tree or scenes.
Add a new point that represents a [code]node[/code] on the virtual axis at a given position set by [code]pos[/code]. You can insert it at a specific index using the [code]at_index[/code] argument. If you use the default value for [code]at_index[/code] , the point is inserted at the end of the blend points array.
Adds a new point that represents a [code]node[/code] on the virtual axis at a given position set by [code]pos[/code]. You can insert it at a specific index using the [code]at_index[/code] argument. If you use the default value for [code]at_index[/code], the point is inserted at the end of the blend points array.
Add a new point that represents a [code]node[/code] at the position set by [code]pos[/code]. You can insert it at a specific index using the [code]at_index[/code] argument. If you use the default value for [code]at_index[/code] , the point is inserted at the end of the blend points array.
Adds a new point that represents a [code]node[/code] at the position set by [code]pos[/code]. You can insert it at a specific index using the [code]at_index[/code] argument. If you use the default value for [code]at_index[/code], the point is inserted at the end of the blend points array.
Contains multiple nodes representing animation states, connected in a graph. Nodes transitions can be configured to happen automatically or via code, using a shortest-path algorithm. Retrieve the AnimationNodeStateMachinePlayback object from the [AnimationTree] node to control it programatically. Example:
Contains multiple nodes representing animation states, connected in a graph. Node transitions can be configured to happen automatically or via code, using a shortest-path algorithm. Retrieve the AnimationNodeStateMachinePlayback object from the [AnimationTree] node to control it programmatically.
[b]Example:[/b]
[codeblock]
var state_machine = $AnimationTree.get("parameters/playback")
Allows control of [AnimationTree] state machines created with [AnimationNodeStateMachine]. Retrieve with [code]$AnimationTree.get("parameters/playback")[/code]. Example:
Allows control of [AnimationTree] state machines created with [AnimationNodeStateMachine]. Retrieve with [code]$AnimationTree.get("parameters/playback")[/code].
[b]Example:[/b]
[codeblock]
var state_machine = $AnimationTree.get("parameters/playback")
An animation player is used for generalpurpose playback of [Animation] resources. It contains a dictionary of animations (referenced by name) and custom blend times between their transitions. Additionally, animations can be played and blended in different channels.
An animation player is used for general-purpose playback of [Animation] resources. It contains a dictionary of animations (referenced by name) and custom blend times between their transitions. Additionally, animations can be played and blended in different channels.
Get the actual playing speed of current animation or 0 if not playing. This speed is the [code]playback_speed[/code] property multiplied by [code]custom_speed[/code] argument specified when calling the [code]play[/code] method.
Gets the actual playing speed of the current animation, or 0 if not playing. This speed is the [code]playback_speed[/code] property multiplied by the [code]custom_speed[/code] argument specified when calling the [method play] method.
Play the animation with key [code]name[/code]. Custom speed and blend times can be set. If custom speed is negative (-1), 'from_end' being [code]true[/code] can play the animation backwards.
If the animation has been paused by [code]stop(true)[/code] it will be resumed. Calling [code]play()[/code] without arguments will also resume the animation.
Plays the animation with key [code]name[/code]. Custom speed and blend times can be set. If [code]custom_speed[/code] is negative and [code]from_end[/code] is [code]true[/code], the animation will play backwards.
If the animation has been paused by [method stop], it will be resumed. Calling [method play] without arguments will also resume the animation.
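For instance, a minimal sketch, assuming an animation named "walk" exists in the player:
[codeblock]
$AnimationPlayer.play("walk")
$AnimationPlayer.play("walk", -1, 2.0) # Plays "walk" at double speed.
$AnimationPlayer.play("walk", -1, -1.0, true) # Plays "walk" backwards from the end.
[/codeblock]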
Play the animation with key [code]name[/code] in reverse.
If the animation has been paused by [code]stop(true)[/code] it will be resumed backwards. Calling [code]play_backwards()[/code] without arguments will also resume the animation backwards.
Plays the animation with key [code]name[/code] in reverse.
If the animation has been paused by [method stop], it will be resumed backwards. Calling [code]play_backwards()[/code] without arguments will also resume the animation backwards.
</description>
</method>
<methodname="queue">
@ -164,7 +164,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Queue an animation for playback once the current one is done.
Queues an animation for playback once the current one is done.
</description>
</method>
<methodname="remove_animation">
@ -173,7 +173,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Remove the animation with key [code]name[/code].
Removes the animation with key [code]name[/code].
</description>
</method>
<methodname="rename_animation">
@ -184,7 +184,7 @@
<argumentindex="1"name="newname"type="String">
</argument>
<description>
Rename an existing animation with key [code]name[/code] to [code]newname[/code].
Renames an existing animation with key [code]name[/code] to [code]newname[/code].
Seek the animation to the [code]seconds[/code] point in time (in seconds). If [code]update[/code] is [code]true[/code], the animation updates too, otherwise it updates at process time. Events between the current frame and [code]seconds[/code] are skipped.
Seeks the animation to the [code]seconds[/code] point in time (in seconds). If [code]update[/code] is [code]true[/code], the animation updates too, otherwise it updates at process time. Events between the current frame and [code]seconds[/code] are skipped.
</description>
</method>
<methodname="set_blend_time">
@ -208,7 +208,7 @@
<argumentindex="2"name="sec"type="float">
</argument>
<description>
Specify a blend time (in seconds) between two animations, referenced by their names.
Specifies a blend time (in seconds) between two animations, referenced by their names.
Stop the currently playing animation. If [code]reset[/code] is [code]true[/code], the animation position is reset to [code]0[/code] and the playback speed is reset to [code]1.0[/code].
If [code]reset[/code] is [code]false[/code], then calling [code]play()[/code] without arguments or [code]play("same_as_before")[/code] will resume the animation. Works the same for the [code]play_backwards()[/code] method.
Stops the currently playing animation. If [code]reset[/code] is [code]true[/code], the animation position is reset to [code]0[/code] and the playback speed is reset to [code]1.0[/code].
If [code]reset[/code] is [code]false[/code], then calling [method play] without arguments or [code]play("same_as_before")[/code] will resume the animation. The same applies to [method play_backwards].
</description>
</method>
</methods>
@ -251,7 +251,7 @@
The process notification in which to update animations. Default value: [constant ANIMATION_PROCESS_IDLE].
The speed scaling ratio. For instance, if this value is 1 then the animation plays at normal speed. If it's 0.5 then it plays at half speed. If it's 2 then it plays at double speed. Default value: [code]1[/code].
The speed scaling ratio. For instance, if this value is 1, then the animation plays at normal speed. If it's 0.5, then it plays at half speed. If it's 2, then it plays at double speed. Default value: [code]1[/code].
Batch method calls during the animation process, then do the calls after events are processed. This avoids bugs involving deleting nodes or modifying the AnimationPlayer while playing.
Generalpurpose area node for detection and 3D physics influence.
General-purpose area node for detection and 3D physics influence.
</brief_description>
<description>
3D area that detects [CollisionObject] nodes overlapping, entering, or exiting. Can also alter or override local physics parameters (gravity, damping).
@ -47,7 +47,8 @@
<argumentindex="0"name="area"type="Node">
</argument>
<description>
If [code]true[/code], the given area overlaps the Area. Note that the result of this test is not immediate after moving objects. For performance, list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
If [code]true[/code], the given area overlaps the Area.
[b]Note:[/b] The result of this test is not immediate after moving objects. For performance, the list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
</description>
</method>
<methodname="overlaps_body"qualifiers="const">
@ -56,7 +57,8 @@
<argumentindex="0"name="body"type="Node">
</argument>
<description>
If [code]true[/code], the given physics body overlaps the Area. Note that the result of this test is not immediate after moving objects. For performance, list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
If [code]true[/code], the given physics body overlaps the Area.
[b]Note:[/b] The result of this test is not immediate after moving objects. For performance, the list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
The [code]body[/code] argument can either be a [PhysicsBody] or a [GridMap] instance (while GridMaps are not physics bodies themselves, they register their tiles with collision shapes as a virtual physics body).
</description>
</method>
@ -106,7 +108,7 @@
The falloff factor for point gravity. The greater the value, the faster gravity decreases with distance.
If [code]true[/code], gravity is calculated from a point (set via [member gravity_vec]). Also see [member space_override]. Default value: [code]false[/code].
If [code]true[/code], gravity is calculated from a point (set via [member gravity_vec]). See also [member space_override]. Default value: [code]false[/code].
If [code]true[/code], the given area overlaps the Area2D. Note that the result of this test is not immediate after moving objects. For performance, list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
If [code]true[/code], the given area overlaps the Area2D.
[b]Note:[/b] The result of this test is not immediate after moving objects. For performance, the list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
</description>
</method>
<methodname="overlaps_body"qualifiers="const">
@ -56,7 +57,8 @@
<argumentindex="0"name="body"type="Node">
</argument>
<description>
If [code]true[/code], the given physics body overlaps the Area2D. Note that the result of this test is not immediate after moving objects. For performance, list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
If [code]true[/code], the given physics body overlaps the Area2D.
[b]Note:[/b] The result of this test is not immediate after moving objects. For performance, the list of overlaps is updated once per frame and before the physics step. Consider using signals instead.
The [code]body[/code] argument can either be a [PhysicsBody2D] or a [TileMap] instance (while TileMaps are not physics bodies themselves, they register their tiles with collision shapes as a virtual physics body).
</description>
</method>
@ -106,7 +108,7 @@
The falloff factor for point gravity. The greater the value, the faster gravity decreases with distance.
If [code]true[/code], gravity is calculated from a point (set via [member gravity_vec]). Also see [member space_override]. Default value: [code]false[/code].
If [code]true[/code], gravity is calculated from a point (set via [member gravity_vec]). See also [member space_override]. Default value: [code]false[/code].
Generic array which can contain several elements of any type, accessible by a numerical index starting at 0. Negative indices can be used to count from the back, like in Python (-1 is the last element, -2 the second to last, etc.). Example:
Generic array which can contain several elements of any type, accessible by a numerical index starting at 0. Negative indices can be used to count from the back, like in Python (-1 is the last element, -2 the second to last, etc.).
Finds the index of an existing value (or the insertion index that maintains sorting order, if the value is not yet present in the array) using binary search. Optionally, a before specifier can be passed. If [code]false[/code], the returned index comes after all existing entries of the value in the array. Note that calling bsearch on an unsorted array results in unexpected behavior.
Finds the index of an existing value (or the insertion index that maintains sorting order, if the value is not yet present in the array) using binary search. Optionally, a [code]before[/code] specifier can be passed. If [code]false[/code], the returned index comes after all existing entries of the value in the array.
[b]Note:[/b] Calling [method bsearch] on an unsorted array results in unexpected behavior.
Finds the index of an existing value (or the insertion index that maintains sorting order, if the value is not yet present in the array) using binary search and a custom comparison method. Optionally, a before specifier can be passed. If [code]false[/code], the returned index comes after all existing entries of the value in the array. The custom method receives two arguments (an element from the array and the value searched for) and must return [code]true[/code] if the first argument is less than the second, and return [code]false[/code] otherwise. Note that calling bsearch on an unsorted array results in unexpected behavior.
Finds the index of an existing value (or the insertion index that maintains sorting order, if the value is not yet present in the array) using binary search and a custom comparison method. Optionally, a [code]before[/code] specifier can be passed. If [code]false[/code], the returned index comes after all existing entries of the value in the array. The custom method receives two arguments (an element from the array and the value searched for) and must return [code]true[/code] if the first argument is less than the second, and return [code]false[/code] otherwise.
[b]Note:[/b] Calling [method bsearch] on an unsorted array results in unexpected behavior.
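A minimal sketch of such a comparison method (the method name is arbitrary):
[codeblock]
func less_than(a, b):
    return a < b

func _ready():
    var numbers = [1, 3, 5, 7]
    print(numbers.bsearch_custom(4, self, "less_than")) # Prints 2.
[/codeblock]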
</description>
</method>
<methodname="clear">
<description>
Clears the array (resizes to 0).
Clears the array. This is equivalent to using [method resize] with a size of [code]0[/code].
</description>
</method>
<methodname="count">
@ -275,7 +278,7 @@
<argumentindex="0"name="size"type="int">
</argument>
<description>
Resizes the array to contain a different number of elements. If the array size is smaller, elements are cleared, if bigger, new elements are Null.
Resizes the array to contain a different number of elements. If the array size is smaller, elements are cleared; if bigger, new elements are [code]null[/code].
</description>
</method>
<methodname="rfind">
@ -303,7 +306,8 @@
</method>
<methodname="sort">
<description>
Sorts the array. Note: strings are sorted in alphabetical, not natural order.
Sorts the array.
[b]Note:[/b] Strings are sorted in alphabetical order, not natural order.
Updates a specified region of mesh arrays on GPU. Warning: only use if you know what you are doing. You can easily cause crashes by calling this function with improper arguments.
Updates a specified region of mesh arrays on the GPU.
[b]Warning:[/b] Only use if you know what you are doing. You can easily cause crashes by calling this function with improper arguments.
The margin around the region. The [Rect2]'s 'size' parameter ('w' and 'h' in the editor) resizes the texture so it fits within the margin.
The margin around the region. The [Rect2]'s [member Rect2.size] parameter ("w" and "h" in the editor) resizes the texture so it fits within the margin.
Amount of amplification. Positive values make the sound louder, negative values make it quieter. Value can range from -80 to 24. Default value: [code]0[/code].
Amount of amplification in decibels. Positive values make the sound louder, negative values make it quieter. Value can range from -80 to 24. Default value: [code]0[/code].
Reduces sounds that exceed a certain threshold level, smooths out the dynamics and increases the overall volume.
</brief_description>
<description>
Dynamic range compressor reduces the level of the sound when the amplitude goes over a certain threshold in decibels. One of the main uses of a compressor is to increase the dynamic range by clipping as little as possible (when sound goes over 0dB).
Compressor has many uses in the mix:
- In the Master bus to compress the whole output (Although a [AudioEffectLimiter] is probably better)
- In the Master bus to compress the whole output (although an [AudioEffectLimiter] is probably better).
- In voice channels to ensure they sound as balanced as possible.
- Sidechained. Sidechained, which can reduce the sound level sidechained with another audio bus for threshold detection.. This technique is very common in video game mixing to download the level of Music/SFX while voices are being heard.
- Sidechained. This can reduce the sound level using another audio bus for threshold detection. This technique is common in video game mixing to lower the level of music and SFX while voices are being heard.
- Accentuates transients by using a wider attack, making effects sound more punchy.
Amount of compression applied to the audio once it passes the threshold level. The higher the ratio the more the loud parts of the audio will be compressed. Value can range from 1 to 48. Default value: [code]4[/code].
Amount of compression applied to the audio once it passes the threshold level. The higher the ratio, the more the loud parts of the audio will be compressed. Value can range from 1 to 48. Default value: [code]4[/code].
Compressor's delay time to stop reducing the signal after the signal level falls below the threshold. Value can range from 20 to 2000. Default value: [code]250ms[/code].
Compressor's delay time to stop reducing the signal after the signal level falls below the threshold, in milliseconds. Value can range from 20 to 2000. Default value: [code]250ms[/code].
Modify the sound and make it dirty. Different types are available: clip, tan, lofi (bit crushing), overdrive, or waveshape.
Modify the sound and make it dirty. Different types are available: clip, tan, lo-fi (bit crushing), overdrive, or waveshape.
By distorting the waveform, the frequency content changes, which will often make the sound "crunchy" or "abrasive". For games, it can simulate sound coming from some saturated device or speaker very efficiently.
</description>
<tutorials>
@ -17,7 +17,7 @@
Distortion power. Value can range from 0 to 1. Default value: [code]0[/code].
High-pass filter. Frequencies higher than this value will not be affected by the distortion. Value can range from 1 to 20000. Default value: [code]16000[/code].
High-pass filter, in Hz. Frequencies higher than this value will not be affected by the distortion. Value can range from 1 to 20000. Default value: [code]16000[/code].
Use it to create a custom equalizer if [AudioEffectEQ6], [AudioEffectEQ10] or [AudioEffectEQ21] don't fit your needs.
</brief_description>
<description>
AudioEffectEQ gives you control over frequencies. Use it to compensate for existing deficiencies in audio. AudioEffectEQ are very useful on the Master Bus to completely master a mix and give it character. They are also very useful when a game is run on a mobile device, to adjust the mix to that kind of speakers (it can be added but disabled when headphones are plugged).
AudioEffectEQ gives you control over frequencies. Use it to compensate for existing deficiencies in audio. AudioEffectEQs are useful on the Master bus to completely master a mix and give it more character. They are also useful when a game is run on a mobile device, to adjust the mix to that kind of speakers (it can be added but disabled when headphones are plugged).
Adds a soft clip Limiter audio effect to an Audio bus.
Adds a soft-clip limiter audio effect to an Audio bus.
</brief_description>
<description>
A limiter is similar to a compressor, but it's less flexible and designed to disallow sound going over a given dB threshold. Adding one in the Master Bus is always recommended to reduce the effects of clipping.
A limiter is similar to a compressor, but it's less flexible and designed to disallow sound going over a given dB threshold. Adding one in the Master bus is always recommended to reduce the effects of clipping.
Soft clipping starts to reduce the peaks a little below the threshold level and progressively increases its effect as the input level increases such that the threshold is never exceeded.
AudioServer is a lowlevel server interface for audio access. It is in charge of creating sample data (playable audio) as well as its playback via a voice interface.
AudioServer is a low-level server interface for audio access. It is in charge of creating sample data (playable audio) as well as its playback via a voice interface.
Stores audio data loaded from [code].wav[/code] files.
Stores audio data loaded from WAV files.
</brief_description>
<description>
AudioStreamSample stores sound samples loaded from [code].wav[/code] files. To play the stored sound use an [AudioStreamPlayer] (for background music) or [AudioStreamPlayer2D]/[AudioStreamPlayer3D] (for positional audio). The sound can be looped.
This class can also be used to store dynamicallygenerated PCM audio data.
AudioStreamSample stores sound samples loaded from WAV files. To play the stored sound, use an [AudioStreamPlayer] (for non-positional audio) or [AudioStreamPlayer2D]/[AudioStreamPlayer3D] (for positional audio). The sound can be looped.
This class can also be used to store dynamically-generated PCM audio data.
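For example, a minimal sketch of playing an imported sample (the file path is an assumption):
[codeblock]
func _ready():
    var player = AudioStreamPlayer.new()
    add_child(player)
    player.stream = load("res://sound.wav")
    player.play()
[/codeblock]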
</description>
<tutorials>
</tutorials>
@ -17,7 +17,7 @@
</argument>
<description>
Saves the AudioStreamSample as a WAV file to [code]path[/code]. Samples with IMA ADPCM format can't be saved.
Note that a [code].wav[/code] extension is automatically appended to [code]path[/code] if it is missing.
[b]Note:[/b] A [code].wav[/code] extension is automatically appended to [code]path[/code] if it is missing.
Copies a region of the screen (or the whole screen) to a buffer so it can be accessed with [code]SCREEN_TEXTURE[/code] in the [code]texture()[/code] function.
</brief_description>
<description>
Node for back-buffering the currentlydisplayed screen. The region defined in the BackBufferCopy node is bufferized with the content of the screen it covers, or the entire screen according to the copy mode set. Use [code]SCREEN_TEXTURE[/code] in the [code]texture()[/code] function to access the buffer.
Node for back-buffering the currently-displayed screen. The region defined in the BackBufferCopy node is bufferized with the content of the screen it covers, or the entire screen according to the copy mode set. Use [code]SCREEN_TEXTURE[/code] in the [code]texture()[/code] function to access the buffer.
If [code]true[/code], lightmap can capture light values greater than [code]1.0[/code]. Turning this off will result in a smaller lightmap. Default value:[code]false[/code].
If [code]true[/code], the lightmap can capture light values greater than [code]1.0[/code]. Turning this off will result in a smaller file size. Default value: [code]false[/code].
Returns the visual state used to draw the button. This is useful mainly when implementing your own draw code by either overriding _draw() or connecting to "draw" signal. The visual state of the button is defined by the DRAW_* enum.
Returns the visual state used to draw the button. This is useful mainly when implementing your own draw code by either overriding _draw() or connecting to "draw" signal. The visual state of the button is defined by the [code]DRAW_*[/code] enum.
Binary mask to choose which mouse buttons this button will respond to.
@ -86,14 +86,14 @@
</signal>
<signalname="pressed">
<description>
This signal is emitted every time the button is toggled or pressed (i.e. activated, so on [code]button_down[/code] if "Click on press" is active and on [code]button_up[/code] otherwise).
Emitted when the button is toggled or pressed. This is on [signal button_down] if [member action_mode] is [constant ACTION_MODE_BUTTON_PRESS] and on [signal button_up] otherwise.
This signal is emitted when the button was just toggled between pressed and normal states (only if toggle_mode is active). The new state is contained in the [i]button_pressed[/i] argument.
Emitted when the button was just toggled between pressed and normal states (only if [member toggle_mode] is active). The new state is contained in the [code]button_pressed[/code] argument.
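For instance, a minimal sketch of connecting to these signals (the node paths are assumptions):
[codeblock]
func _ready():
    $Button.connect("pressed", self, "_on_button_pressed")
    $CheckBox.connect("toggled", self, "_on_checkbox_toggled")

func _on_button_pressed():
    print("Button pressed")

func _on_checkbox_toggled(button_pressed):
    print("Toggled to: ", button_pressed)
[/codeblock]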
3x3 matrix used for 3D rotation and scale. Contains 3 vector fields x,y and z as its columns, which can be interpreted as the local basis vectors of a transformation. Can also be accessed as array of 3D vectors. These vectors are orthogonal to each other, but are not necessarily normalized (due to scaling). Almost always used as orthogonal basis for a [Transform].
3×3 matrix used for 3D rotation and scale. Contains 3 vector fields X, Y and Z as its columns, which can be interpreted as the local basis vectors of a transformation. Can also be accessed as array of 3D vectors. These vectors are orthogonal to each other, but are not necessarily normalized (due to scaling). Almost always used as an orthogonal basis for a [Transform].
For such use, it is composed of a scaling and a rotation matrix, in that order (M = R.S).
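As an illustration (the values are arbitrary), such a basis can be built by multiplying a rotation by a scale:
[codeblock]
var rotation = Basis(Vector3(0, 1, 0), deg2rad(45)) # R: rotate 45 degrees around Y
var scaling = Basis().scaled(Vector3(2, 2, 2))      # S: uniform scale of 2
var basis = rotation * scaling                      # M = R.S
[/codeblock]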
</description>
<tutorials>
@ -26,7 +26,7 @@
<argumentindex="0"name="from"type="Vector3">
</argument>
<description>
Create a rotation matrix (in the YXZ convention: first Z, then X, and Y last) from the specified Euler angles, given in the vector format as (X-angle, Y-angle, Z-angle).
Create a rotation matrix (in the YXZ convention: first Z, then X, and Y last) from the specified Euler angles, given in the vector format as (X angle, Y angle, Z angle).
</description>
</method>
<methodname="Basis">
@ -64,7 +64,7 @@
<returntype="Vector3">
</return>
<description>
Assuming that the matrix is a proper rotation matrix (orthonormal matrix with determinant +1), return Euler angles (in the YXZ convention: first Z, then X, and Y last). Returned vector contains the rotation angles in the format (X-angle, Y-angle, Z-angle).
Assuming that the matrix is a proper rotation matrix (orthonormal matrix with determinant +1), return Euler angles (in the YXZ convention: first Z, then X, and Y last). Returned vector contains the rotation angles in the format (X angle, Y angle, Z angle).
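For example, a hypothetical round trip through Euler angles (in radians):
[codeblock]
var basis = Basis(Vector3(0.2, 0.5, 0.0)) # (X angle, Y angle, Z angle)
print(basis.get_euler())                  # Prints a Vector3 close to (0.2, 0.5, 0)
[/codeblock]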
</description>
</method>
<methodname="get_orthogonal_index">
@ -148,7 +148,7 @@
<argumentindex="0"name="with"type="Vector3">
</argument>
<description>
Transposed dot product with the x axis of the matrix.
Transposed dot product with the X axis of the matrix.
</description>
</method>
<methodname="tdoty">
@ -157,7 +157,7 @@
<argumentindex="0"name="with"type="Vector3">
</argument>
<description>
Transposed dot product with the y axis of the matrix.
Transposed dot product with the Y axis of the matrix.
</description>
</method>
<methodname="tdotz">
@ -166,7 +166,7 @@
<argumentindex="0"name="with"type="Vector3">
</argument>
<description>
Transposed dot product with the z axis of the matrix.
Transposed dot product with the Z axis of the matrix.
</description>
</method>
<methodname="transposed">
@ -191,19 +191,20 @@
<argumentindex="0"name="v"type="Vector3">
</argument>
<description>
Returns a vector transformed (multiplied) by the transposed matrix. Note that this results in a multiplication by the inverse of the matrix only if it represents a rotation-reflection.
Returns a vector transformed (multiplied) by the transposed matrix.
[b]Note:[/b] This results in a multiplication by the inverse of the matrix only if it represents a rotation-reflection.
Adds a character to the font, where [code]character[/code] is the unicode value, [code]texture[/code] is the texture index, [code]rect[/code] is the region in the texture (in pixels!), [code]align[/code] is the (optional) alignment for the character and [code]advance[/code] is the (optional) advance.
Adds a character to the font, where [code]character[/code] is the Unicode value, [code]texture[/code] is the texture index, [code]rect[/code] is the region in the texture (in pixels!), [code]align[/code] is the (optional) alignment for the character and [code]advance[/code] is the (optional) advance.
When this property is enabled, text that is too large to fit the button is clipped; when disabled, the Button will always be wide enough to hold the text. This property is disabled by default.
The sphere's radius if [enum EmissionShape] is set to [constant EMISSION_SHAPE_SPHERE].
@ -110,19 +110,19 @@
The particle system's frame rate is fixed to a value. For instance, changing the value to 2 will make the particles render at 2 frames per second. Note this does not slow down the particle system itself.
The sphere's radius if [member emission_shape] is set to [constant EMISSION_SHAPE_SPHERE].
@ -111,12 +111,12 @@
The particle system's frame rate is fixed to a value. For instance, changing the value to 2 will make the particles render at 2 frames per second. Note this does not slow down the simulation of the particle system itself.
Camera is a special node that displays what is visible from its current location. Cameras register themselves in the nearest [Viewport] node (when ascending the tree). Only one camera can be active per viewport. If no viewport is available ascending the tree, the Camera will register in the global viewport. In other words, a Camera just provides [i]3D[/i] display capabilities to a [Viewport], and, without one, a scene registered in that [Viewport] (or higher viewports) can't be displayed.
Camera is a special node that displays what is visible from its current location. Cameras register themselves in the nearest [Viewport] node (when ascending the tree). Only one camera can be active per viewport. If no viewport is available ascending the tree, the camera will register in the global viewport. In other words, a camera just provides 3D display capabilities to a [Viewport], and, without one, a scene registered in that [Viewport] (or higher viewports) can't be displayed.
If this is the current Camera, remove it from being current. If [code]enable_next[/code] is [code]true[/code], request to make the next Camera current, if any.
If this is the current camera, remove it from being current. If [code]enable_next[/code] is [code]true[/code], request to make the next camera current, if any.
</description>
</method>
<methodname="get_camera_rid"qualifiers="const">
@ -29,7 +29,7 @@
<returntype="Transform">
</return>
<description>
Gets the camera transform. Subclassed cameras (such as CharacterCamera) may provide different transforms than the [Node] transform.
Gets the camera transform. Subclassed cameras such as [InterpolatedCamera] may provide different transforms than the [Node] transform.
Returns [code]true[/code] if the given position is behind the Camera. Note that a position which returns [code]false[/code] may still be outside the Camera's field of view.
Returns [code]true[/code] if the given position is behind the camera.
[b]Note:[/b] A position which returns [code]false[/code] may still be outside the camera's field of view.
</description>
</method>
<methodname="make_current">
<returntype="void">
</return>
<description>
Makes this camera the current Camera for the [Viewport] (see class description). If the Camera Node is outside the scene tree, it will attempt to become current once it's added.
Makes this camera the current camera for the [Viewport] (see class description). If the camera node is outside the scene tree, it will attempt to become current once it's added.
Sets the camera projection to orthogonal mode, by specifying a width and the [i]near[/i] and [i]far[/i] clip planes in worldspace units. (As a hint, 2D games often use this projection, with values specified in pixels)
Sets the camera projection to orthogonal mode, by specifying a width and the [code]near[/code] and [code]far[/code] clip planes in worldspace units. (As a hint, 2D games often use this projection, with values specified in pixels)
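A minimal sketch (the [code]$Camera[/code] node path is hypothetical):
[codeblock]
# A 2D-style projection: 50 world-space units wide, near and far planes at 0.1 and 100.
$Camera.set_orthogonal(50, 0.1, 100)
[/codeblock]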
</description>
</method>
<methodname="set_perspective">
@ -147,7 +148,7 @@
<argumentindex="2"name="z_far"type="float">
</argument>
<description>
Sets the camera projection to perspective mode, by specifying a [i]FOV[/i] Y angle in degrees (FOV means Field of View), and the [i]near[/i] and [i]far[/i] clip planes in worldspace units.
Sets the camera projection to perspective mode, by specifying a [code]fov[/code] angle in degrees (FOV means Field of View), and the [code]near[/code] and [code]far[/code] clip planes in world-space units.
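For example (the node path is hypothetical):
[codeblock]
# 70 degree vertical FOV, near plane at 0.05, far plane at 100 world-space units.
$Camera.set_perspective(70, 0.05, 100)
[/codeblock]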
If not [constant DOPPLER_TRACKING_DISABLED] this Camera will simulate the Doppler effect for objects changed in particular [code]_process[/code] methods. Default value: [constant DOPPLER_TRACKING_DISABLED].
If not [constant DOPPLER_TRACKING_DISABLED], this camera will simulate the Doppler effect for objects changed in particular [code]_process[/code] methods. See [enum DopplerTracking] for possible values. Default value: [constant DOPPLER_TRACKING_DISABLED].
The camera's field of view angle (in degrees). Only applicable in perspective mode. Since [member keep_aspect] locks one axis, [code]fov[/code] sets the other axis' field of view angle.
The camera's projection mode. In [constant PROJECTION_PERSPECTIVE] mode, objects' z-distance from the camera's local space scales their perceived size.
The camera's projection mode. In [constant PROJECTION_PERSPECTIVE] mode, objects' Z distance from the camera's local space scales their perceived size.
The camera's size measured as 1/2 the width or height. Only applicable in orthogonal mode. Since [member keep_aspect] locks one axis, [code]size[/code] sets the other axis' size length.
Preserves the horizontal aspect ratio; also known as Vert- scaling. This is usually the best option for projects running in portrait mode, as taller aspect ratios will benefit from a wider vertical FOV.
Preserves the vertical aspect ratio; also known as Hor+ scaling. This is usually the best option for projects running in landscape mode, as wider aspect ratios will automatically benefit from a wider horizontal FOV.
Simulate Doppler effect by tracking positions of objects that are changed in [code]_process[/code]. Changes in the relative velocity of this Camera compared to those objects affect how Audio is perceived (changing the Audio's [code]pitch shift[/code]).
Simulate Doppler effect by tracking positions of objects that are changed in [code]_process[/code]. Changes in the relative velocity of this camera compared to those objects affect how Audio is perceived (changing the Audio's [code]pitch shift[/code]).
Simulate Doppler effect by tracking positions of objects that are changed in [code]_physics_process[/code]. Changes in the relative velocity of this Camera compared to those objects affect how Audio is perceived (changing the Audio's [code]pitch shift[/code]).
Simulate Doppler effect by tracking positions of objects that are changed in [code]_physics_process[/code]. Changes in the relative velocity of this camera compared to those objects affect how Audio is perceived (changing the Audio's [code]pitch shift[/code]).
Camera node for 2D scenes. It forces the screen (current layer) to scroll following this node. This makes it easier (and faster) to program scrollable scenes than manually changing the position of [CanvasItem]based nodes.
This node is intended to be a simple helper to get things going quickly and it may happen often that more functionality is desired to change how the camera works. To make your own custom camera node, simply inherit from [Node2D] and change the transform of the canvas by calling get_viewport().set_canvas_transform(m) in [Viewport].
Camera node for 2D scenes. It forces the screen (current layer) to scroll following this node. This makes it easier (and faster) to program scrollable scenes than manually changing the position of [CanvasItem]-based nodes.
This node is intended to be a simple helper to get things going quickly and it may happen that more functionality is desired to change how the camera works. To make your own custom camera node, simply inherit from [Node2D] and change the transform of the canvas by calling [code]get_viewport().set_canvas_transform(m)[/code] in [Viewport].
Make this the current 2D camera for the scene (viewport and layer), in case there's many cameras in the scene.
Make this the current 2D camera for the scene (viewport and layer), in case there are many cameras in the scene.
</description>
</method>
<methodname="reset_smoothing">
<returntype="void">
</return>
<description>
Set the camera's position immediately to its current smoothing destination.
Sets the camera's position immediately to its current smoothing destination.
This has no effect if smoothing is disabled.
</description>
</method>
@ -69,7 +69,7 @@
If [code]true[/code], the camera is the active camera for the current scene. Only one camera can be current, so setting a different camera [code]current[/code] will disable this one.
Bottom margin needed to drag the camera. A value of [code]1[/code] makes the camera move only when reaching the edge of the screen.
@ -90,13 +90,13 @@
If [code]true[/code], the camera only moves when reaching the vertical drag margins. If [code]false[/code], the camera moves vertically regardless of margins. Default value: [code]true[/code].
The camera's zoom relative to the viewport. Values larger than [code]Vector2(1, 1)[/code] zoom out and smaller values zoom in. For an example, use [code]Vector2(0.5, 0.5)[/code] for a 2x zoom in, and [code]Vector2(4, 4)[/code] for a 4x zoom out.
The camera's zoom relative to the viewport. Values larger than [code]Vector2(1, 1)[/code] zoom out and smaller values zoom in. For example, use [code]Vector2(0.5, 0.5)[/code] for a 2× zoom-in, and [code]Vector2(4, 4)[/code] for a 4× zoom-out.
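For instance (the node path is hypothetical):
[codeblock]
$Camera2D.zoom = Vector2(0.5, 0.5) # 2x zoom-in
$Camera2D.zoom = Vector2(4, 4)     # 4x zoom-out
[/codeblock]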
A camera feed gives you access to a single physical camera attached to your device.
</brief_description>
<description>
A camera feed gives you access to a single physical camera attached to your device.
When enabled Godot will start capturing frames from the camera which can then be used. Do note that many cameras will return YCbCr images which are split into two textures and need to be combined in a shader. Godot does this automatically for you if you set the environment to show the camera image in the background.
A camera feed gives you access to a single physical camera attached to your device. When enabled, Godot will start capturing frames from the camera which can then be used.
[b]Note:[/b] Many cameras will return YCbCr images which are split into two textures and need to be combined in a shader. Godot does this automatically for you if you set the environment to show the camera image in the background.
This texture gives access to the camera texture provided by a [CameraFeed]. Note that many cameras supply YCbCr images which need to be converted in a shader.
This texture gives access to the camera texture provided by a [CameraFeed].
[b]Note:[/b] Many cameras supply YCbCr images which need to be converted in a shader.
Disable blending mode. Colors including alpha are written as-is. Only applicable for render targets with a transparent background. No lighting will be applied.
Disables blending mode. Colors including alpha are written as-is. Only applicable for render targets with a transparent background. No lighting will be applied.
The CanvasItem's transform has changed. This notification is only received if enabled by [method set_notify_transform] or [method set_notify_local_transform].
Returns the value of the integer constant 'name' of 'class' or its ancestry. Always returns 0 when the constant could not be found.
Returns the value of the integer constant [code]name[/code] of [code]class[/code] or its ancestry. Always returns 0 when the constant could not be found.
Returns an array with all the methods of 'class' or its ancestry if 'no_inheritance' is [code]false[/code]. Every element of the array is a [Dictionary] with the following keys: args, default_args, flags, id, name, return: (class_name, hint, hint_string, name, type, usage).
Returns an array with all the methods of [code]class[/code] or its ancestry if [code]no_inheritance[/code] is [code]false[/code]. Every element of the array is a [Dictionary] with the following keys: [code]args[/code], [code]default_args[/code], [code]flags[/code], [code]id[/code], [code]name[/code], [code]return: (class_name, hint, hint_string, name, type, usage)[/code].
Returns an array with all the properties of 'class' or its ancestry if 'no_inheritance' is [code]false[/code].
Returns an array with all the properties of [code]class[/code] or its ancestry if [code]no_inheritance[/code] is [code]false[/code].
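A quick sketch using [ClassDB] from GDScript:
[codeblock]
# Print the names of the properties defined directly on Node2D (inherited ones excluded).
for prop in ClassDB.class_get_property_list("Node2D", true):
    print(prop.name)
[/codeblock]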
</description>
</method>
<methodname="class_get_signal"qualifiers="const">
@ -99,7 +99,7 @@
<argumentindex="1"name="signal"type="String">
</argument>
<description>
Returns the 'signal' data of 'class' or its ancestry. The returned value is a [Dictionary] with the following keys: args, default_args, flags, id, name, return: (class_name, hint, hint_string, name, type, usage).
Returns the [code]signal[/code] data of [code]class[/code] or its ancestry. The returned value is a [Dictionary] with the following keys: [code]args[/code], [code]default_args[/code], [code]flags[/code], [code]id[/code], [code]name[/code], [code]return: (class_name, hint, hint_string, name, type, usage)[/code].
Returns an array with all the signals of 'class' or its ancestry if 'no_inheritance' is [code]false[/code]. Every element of the array is a [Dictionary] as described in [method class_get_signal].
Returns an array with all the signals of [code]class[/code] or its ancestry if [code]no_inheritance[/code] is [code]false[/code]. Every element of the array is a [Dictionary] as described in [method class_get_signal].
Editor-only class for defining a collision polygon in 3D space.
</brief_description>
<description>
Allows editing a collision polygon's vertices on a selected plane. Can also set a depth perpendicular to that plane. This class is only available in the editor. It will not appear in the scene tree at runtime. Creates a [Shape] for gameplay. Properties modified during gameplay will have no effect.
Allows editing a collision polygon's vertices on a selected plane. Can also set a depth perpendicular to that plane. This class is only available in the editor. It will not appear in the scene tree at run-time. Creates a [Shape] for gameplay. Properties modified during gameplay will have no effect.
</description>
<tutorials>
</tutorials>
@ -18,7 +18,8 @@
If [code]true[/code], no collision will be produced.
Array of vertices which define the polygon. Note that the returned value is a copy of the original. Methods which mutate the size or properties of the return value will not impact the original polygon. To change properties of the polygon, assign it to a temporary variable and make changes before reassigning the [code]polygon[/code] member.
Array of vertices which define the polygon.
[b]Note:[/b] The returned value is a copy of the original. Methods which mutate the size or properties of the return value will not impact the original polygon. To change properties of the polygon, assign it to a temporary variable and make changes before reassigning the [code]polygon[/code] member.
Color in RGBA format with some support for ARGB format.
</brief_description>
<description>
A color is represented by red, green, and blue [code](r, g, b)[/code] components. Additionally, [code]a[/code] represents the alpha component, often used for transparency. Values are in floatingpoint and usually range from 0 to 1. Some properties (such as [member CanvasItem.modulate]) may accept values > 1.
A color is represented by red, green, and blue [code](r, g, b)[/code] components. Additionally, [code]a[/code] represents the alpha component, often used for transparency. Values are in floating-point and usually range from 0 to 1. Some properties (such as [member CanvasItem.modulate]) may accept values greater than 1.
You can also create a color from standardized color names by using [method @GDScript.ColorN].
</description>
<tutorials>
@ -19,9 +19,9 @@
Constructs a color from an HTML hexadecimal color string in ARGB or RGB format. See also [method @GDScript.ColorN].
[codeblock]
# Each of the following creates the same color RGBA(178, 217, 10, 255)
var c1 = Color("#ffb2d90a") # ARGB format with '#'
var c1 = Color("#ffb2d90a") # ARGB format with "#"
var c2 = Color("ffb2d90a") # ARGB format
var c3 = Color("#b2d90a") # RGB format with '#'
var c3 = Color("#b2d90a") # RGB format with "#"
var c4 = Color("b2d90a") # RGB format
[/codeblock]
</description>
@ -136,7 +136,7 @@
The gray value is calculated as [code](r + g + b) / 3[/code].
[codeblock]
var c = Color(0.2, 0.45, 0.82)
var gray = c.gray() # a value of 0.466667
var gray = c.gray() # A value of 0.466667
[/codeblock]
</description>
</method>
@ -147,7 +147,7 @@
Returns the inverted color [code](1 - r, 1 - g, 1 - b, a)[/code].
[codeblock]
var c = Color(0.3, 0.4, 0.9)
var inverted_color = c.inverted() # a color of an RGBA(178, 153, 26, 255)
var inverted_color = c.inverted() # A color of an RGBA(178, 153, 26, 255)
[/codeblock]
</description>
</method>
@ -176,7 +176,7 @@
[codeblock]
var c1 = Color(1.0, 0.0, 0.0)
var c2 = Color(0.0, 1.0, 0.0)
var li_c = c1.linear_interpolate(c2, 0.5) # a color of an RGBA(128, 128, 0, 255)
var li_c = c1.linear_interpolate(c2, 0.5) # A color of an RGBA(128, 128, 0, 255)
[/codeblock]
</description>
</method>
@ -234,8 +234,8 @@
Setting [code]with_alpha[/code] to [code]false[/code] excludes alpha from the hexadecimal string.
Adds the given color to a list of color presets. The presets are displayed in the color picker and the user will be able to select them. Note: the presets list is only for [i]this[/i] color picker.
Adds the given color to a list of color presets. The presets are displayed in the color picker and the user will be able to select them.
[b]Note:[/b] The presets list is only for [i]this[/i] color picker.
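For example (the node path is hypothetical):
[codeblock]
$ColorPicker.add_preset(Color(1, 0, 0))  # Add pure red as a preset
$ColorPicker.add_preset(Color("4286f4")) # Add a blue tone from an RGB hex string
[/codeblock]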
</description>
</method>
<methodname="erase_preset">
@ -24,7 +25,7 @@
<argumentindex="0"name="color"type="Color">
</argument>
<description>
Remove the given color from the list of color presets of this color picker.
Removes the given color from the list of color presets of this color picker.
</description>
</method>
<methodname="get_presets"qualifiers="const">
@ -46,7 +47,7 @@
If [code]true[/code], shows an alpha channel slider (transparency).
If [code]true[/code], allows the color R, G, B component values to go beyond 1.0, which can be used for certain special operations that require it (like tinting without darkening or rendering sprites in HDR).
Concave polygon 2D shape resource for physics. It is made out of segments and is very optimal for complex polygonal concave collisions. It is really not advised to use for [RigidBody2D] nodes. A CollisionPolygon2D in convex decomposition mode (solids) or several convex objects are advised for that instead. Otherwise, a concave polygon 2D shape is better for static collisions.
Concave polygon 2D shape resource for physics. It is made out of segments and is optimal for complex polygonal concave collisions. However, it is not advised to use for [RigidBody2D] nodes. A CollisionPolygon2D in convex decomposition mode (solids) or several convex objects are advised for that instead. Otherwise, a concave polygon 2D shape is better for static collisions.
The main difference between a [ConvexPolygonShape2D] and a [ConcavePolygonShape2D] is that a concave polygon assumes it is concave and uses a more complex method of collision detection, and a convex one forces itself to be convex in order to speed up collision detection.
All User Interface nodes inherit from Control. A control's anchors and margins adapt its position and size relative to its parent.
</brief_description>
<description>
Base class for all User Interface or [i]UI[/i] related nodes. [Control] features a bounding rectangle that defines its extents, an anchor position relative to its parent control or the current viewport, and margins that represent an offset to the anchor. The margins update automatically when the node, any of its parents, or the screen size change.
Base class for all UI-related nodes. [Control] features a bounding rectangle that defines its extents, an anchor position relative to its parent control or the current viewport, and margins that represent an offset to the anchor. The margins update automatically when the node, any of its parents, or the screen size change.
For more information on Godot's UI system, anchors, margins, and containers, see the related tutorials in the manual. To build flexible UIs, you'll need a mix of UI elements that inherit from [Control] and [Container] nodes.
[b]User Interface nodes and input[/b]
Godot sends input events to the scene's root node first, by calling [method Node._input]. [method Node._input] forwards the event down the node tree to the nodes under the mouse cursor, or on keyboard focus. To do so, it calls [method MainLoop._input_event]. Call [method accept_event] so no other node receives the event. Once you accepted an input, it becomes handled so [method Node._unhandled_input] will not process it.
Only one [Control] node can be in keyboard focus. Only the node in focus will receive keyboard events. To get the focus, call [method grab_focus]. [Control] nodes lose focus when another node grabs it, or if you hide the node in focus.
Set [member mouse_filter] to [constant MOUSE_FILTER_IGNORE] to tell a [Control] node to ignore mouse or touch events. You'll need it if you place an icon on top of a button.
Sets [member mouse_filter] to [constant MOUSE_FILTER_IGNORE] to tell a [Control] node to ignore mouse or touch events. You'll need it if you place an icon on top of a button.
[Theme] resources change the Control's appearance. If you change the [Theme] on a [Control] node, it affects all of its children. To override some of the theme's parameters, call one of the [code]add_*_override[/code] methods, like [method add_font_override]. You can override the theme with the inspector.
</description>
<tutorials>
@ -135,9 +135,9 @@
extends Control
func can_drop_data(position, data):
# check position if it is relevant to you
# otherwise just check data
return typeof(data) == TYPE_DICTIONARY and data.has('expected')
# Check position if it is relevant to you
# Otherwise, just check data
return typeof(data) == TYPE_DICTIONARY and data.has("expected")
[/codeblock]
</description>
</method>
@ -154,10 +154,10 @@
extends ColorRect
func can_drop_data(position, data):
return typeof(data) == TYPE_DICTIONARY and data.has('color')
return typeof(data) == TYPE_DICTIONARY and data.has("color")
func drop_data(position, data):
color = data['color']
color = data["color"]
[/codeblock]
</description>
</method>
@ -221,7 +221,7 @@
<argumentindex="0"name="position"type="Vector2">
</argument>
<description>
Godot calls this method to get data that can be dragged and dropped onto controls that expect drop data. Returns null if there is no data to drag. Controls that want to receive drop data should implement [method can_drop_data] and [method drop_data]. [code]position[/code] is local to this control. Drag may be forced with [method force_drag].
Godot calls this method to get data that can be dragged and dropped onto controls that expect drop data. Returns [code]null[/code] if there is no data to drag. Controls that want to receive drop data should implement [method can_drop_data] and [method drop_data]. [code]position[/code] is local to this control. Drag may be forced with [method force_drag].
A preview that will follow the mouse that should represent the data can be set with [method set_drag_preview]. A good time to set the preview is in this method.
[codeblock]
extends Control
@ -862,16 +862,16 @@
Show the system's forbidden mouse cursor when the user hovers the node. Often a crossed circle.
Show the system's vertical resize mouse cursor when the user hovers the node. A doubleheaded vertical arrow. It tells the user they can resize the window or the panel vertically.
Show the system's vertical resize mouse cursor when the user hovers the node. A double-headed vertical arrow. It tells the user they can resize the window or the panel vertically.
Show the system's horizontal resize mouse cursor when the user hovers the node. A doubleheaded horizontal arrow. It tells the user they can resize the window or the panel horizontally.
Show the system's horizontal resize mouse cursor when the user hovers the node. A double-headed horizontal arrow. It tells the user they can resize the window or the panel horizontally.
Show the system's window resize mouse cursor when the user hovers the node. The cursor is a doubleheaded arrow that goes from the bottom left to the top right. It tells the user they can resize the window or the panel both horizontally and vertically.
Show the system's window resize mouse cursor when the user hovers the node. The cursor is a double-headed arrow that goes from the bottom left to the top right. It tells the user they can resize the window or the panel both horizontally and vertically.
Show the system's window resize mouse cursor when the user hovers the node. The cursor is a doubleheaded arrow that goes from the top left to the bottom right, the opposite of [constant CURSOR_BDIAGSIZE]. It tells the user they can resize the window or the panel both horizontally and vertically.
Show the system's window resize mouse cursor when the user hovers the node. The cursor is a double-headed arrow that goes from the top left to the bottom right, the opposite of [constant CURSOR_BDIAGSIZE]. It tells the user they can resize the window or the panel both horizontally and vertically.
Show the system's move mouse cursor when the user hovers the node. It shows two double-headed arrows at a 90-degree angle. It tells the user they can move a UI element freely.
@ -931,7 +931,7 @@
Snap all 4 anchors to a horizontal line that cuts the parent control in half. Use with [method set_anchors_preset].
Snap all 4 anchors to the respective corners of the parent control. Set all 4 margins to 0 after you applied this preset and the [Control] will fit its parent control. This is equivalent to to the "Full Rect" layout option in the editor. Use with [method set_anchors_preset].
Snap all 4 anchors to the respective corners of the parent control. Set all 4 margins to 0 after you apply this preset and the [Control] will fit its parent control. This is equivalent to the "Full Rect" layout option in the editor. Use with [method set_anchors_preset].
Tells the parent [Container] to align the node with its end, either the bottom or the right edge. It doesn't work with the fill or expand size flags. Use with [member size_flags_horizontal] and [member size_flags_vertical].
The control will receive mouse button input events through [method _gui_input] if clicked on. And the control will receive the [signal mouse_entered] and [signal mouse_exited] signals. These events are automatically marked as handled and they will not propagate further to other controls. This also results in blocking signals in other controls.
The control will receive mouse button input events through [method _gui_input] if clicked on. And the control will receive the [signal mouse_entered] and [signal mouse_exited] signals. These events are automatically marked as handled, and they will not propagate further to other controls. This also results in blocking signals in other controls.
The control will receive mouse button input events through [method _gui_input] if clicked on. And the control will receive the [signal mouse_entered] and [signal mouse_exited] signals. If this control does not handle the event, the parent control (if any) will be considered, and so on until there is no more parent control to potentially handle it. This also allows signals to fire in other controls. Even if no control handled it at all, the event will still be handled automatically, so unhandled input will not be fired.
The control will not receive mouse button input events through [method _gui_input]. Also the control will not receive the [signal mouse_entered] nor [signal mouse_exited] signals. This will not block other controls from receiving these events or firing the signals. Ignored events will not be handled automatically.
The control will not receive mouse button input events through [method _gui_input]. The control will also not receive the [signal mouse_entered] nor [signal mouse_exited] signals. This will not block other controls from receiving these events or firing the signals. Ignored events will not be handled automatically.
Convex Polygon Shape for 2D physics. A convex polygon, whatever its shape, is internally decomposed into as many convex polygons as needed to ensure all collision checks against it are always done on convex polygons (which are faster to check).
Convex polygon shape for 2D physics. A convex polygon, whatever its shape, is internally decomposed into as many convex polygons as needed to ensure all collision checks against it are always done on convex polygons (which are faster to check).
The main difference between a [ConvexPolygonShape2D] and a [ConcavePolygonShape2D] is that a concave polygon assumes it is concave and uses a more complex method of collision detection, and a convex one forces itself to be convex in order to speed up collision detection.
A 6-sided 3D texture typically used for faking reflections. It can be used to make an object look as if it's reflecting its surroundings. This usually delivers much better performance than other reflection methods.
A curve that can be saved and re-used for other objects. By default it ranges between [code]0[/code] and [code]1[/code] on the y-axis and positions points relative to the [code]0.5[/code] y-position.
A curve that can be saved and re-used for other objects. By default, it ranges between [code]0[/code] and [code]1[/code] on the Y axis and positions points relative to the [code]0.5[/code] Y position.
</description>
<tutorials>
</tutorials>
@ -60,7 +60,7 @@
<argumentindex="0"name="index"type="int">
</argument>
<description>
Returns the left [code]TangentMode[/code] for the point at [code]index[/code].
Returns the left [enum TangentMode] for the point at [code]index[/code].
Returns the y value for the point that would exist at x-position [code]offset[/code] along the curve.
Returns the Y value for the point that would exist at the X position [code]offset[/code] along the curve.
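A minimal sketch:
[codeblock]
var curve = Curve.new()
curve.add_point(Vector2(0, 0))
curve.add_point(Vector2(1, 1))
print(curve.interpolate(0.5)) # Prints the Y value halfway along the X axis
[/codeblock]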
</description>
</method>
<methodname="interpolate_baked">
@ -114,7 +114,7 @@
<argumentindex="0"name="offset"type="float">
</argument>
<description>
Returns the y value for the point that would exist at x-position [code]offset[/code] along the curve using the baked cache. Bakes the curve's points if not already baked.
Returns the Y value for the point that would exist at the X position [code]offset[/code] along the curve using the baked cache. Bakes the curve's points if not already baked.
Adds a point to a curve, at [code]position[/code], with control points [code]in[/code] and [code]out[/code].
Adds a point to a curve at [code]position[/code], with control points [code]in[/code] and [code]out[/code].
If [code]at_position[/code] is given, the point is inserted before the point number [code]at_position[/code], moving that point (and every point after) after the inserted point. If [code]at_position[/code] is not given, or is an illegal value ([code]at_position <0[/code] or [code]at_position >= [method get_point_count][/code]), the point will be appended at the end of the point list.
Adds a point to a curve, at [code]position[/code], with control points [code]in[/code] and [code]out[/code].
Adds a point to a curve at [code]position[/code], with control points [code]in[/code] and [code]out[/code].
If [code]at_position[/code] is given, the point is inserted before the point number [code]at_position[/code], moving that point (and every point after) after the inserted point. If [code]at_position[/code] is not given, or is an illegal value ([code]at_position <0[/code] or [code]at_position >= [method get_point_count][/code]), the point will be appended at the end of the point list.
The spring joint's damping ratio. A value between [code]0[/code] and [code]1[/code]. When the two bodies move into different directions the system tries to align them to the spring axis again. A high [code]damping[/code] value forces the attached bodies to align faster. Default value: [code]1[/code]
The spring joint's damping ratio. A value between [code]0[/code] and [code]1[/code]. When the two bodies move in different directions, the system tries to align them to the spring axis again. A high [code]damping[/code] value forces the attached bodies to align faster. Default value: [code]1[/code].
When the bodies attached to the spring joint move they stretch or squash it. The joint always tries to resize towards this length. Default value: [code]0[/code]
When the bodies attached to the spring joint move, they stretch or squash it. The joint always tries to resize towards this length. Default value: [code]0[/code].
The higher the value, the less the bodies attached to the joint will deform it. The joint applies an opposing force to the bodies, the product of the stiffness multiplied by the size difference from its resting length. Default value: [code]20[/code]
The higher the value, the less the bodies attached to the joint will deform it. The joint applies an opposing force to the bodies, the product of the stiffness multiplied by the size difference from its resting length. Default value: [code]20[/code].
Erase a dictionary key/value pair by key. Returns [code]true[/code] if the given key was present in the dictionary, [code]false[/code] otherwise. Do not erase elements while iterating over the dictionary.
Erases a dictionary key/value pair by key. Returns [code]true[/code] if the given key was present in the dictionary, [code]false[/code] otherwise. Do not erase elements while iterating over the dictionary.
Returns the current value for the specified key in the [Dictionary]. If the key does not exist, the method returns the value of the optional default argument, or Null if it is omitted.
Returns the current value for the specified key in the [Dictionary]. If the key does not exist, the method returns the value of the optional default argument, or [code]null[/code] if it is omitted.
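For example:
[codeblock]
var settings = {"volume": 0.8}
print(settings.get("volume", 1.0))     # Prints 0.8
print(settings.get("brightness", 1.0)) # Prints 1, the default, as the key is missing
[/codeblock]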
Directional light from a distance, as from the Sun.
</brief_description>
<description>
A directional light is a type of [Light] node that models an infinite number of parallel rays covering the entire scene. It is used for lights with strong intensity that are located far away from the scene to model sunlight or moonlight. The worldspace location of the DirectionalLight transform (origin) is ignored. Only the basis is used do determine light direction.
A directional light is a type of [Light] node that models an infinite number of parallel rays covering the entire scene. It is used for lights with strong intensity that are located far away from the scene to model sunlight or moonlight. The worldspace location of the DirectionalLight transform (origin) is ignored. Only the basis is used to determine light direction.
The distance from camera to shadow split 1. Relative to [member directional_shadow_max_distance]. Only used when [member directional_shadow_mode] is one of the [code]SHADOW_PARALLEL_*_SPLITS[/code] constants.
The distance from camera to shadow split 1. Relative to [member directional_shadow_max_distance]. Only used when [member directional_shadow_mode] is [code]SHADOW_PARALLEL_2_SPLITS[/code] or [code]SHADOW_PARALLEL_4_SPLITS[/code].
The distance from shadow split 1 to split 2. Relative to [member directional_shadow_max_distance]. Only used when [member directional_shadow_mode] is [code]SHADOW_PARALLEL_3_SPLITS[/code] or [code]SHADOW_PARALLEL_4_SPLITS[/code].
The distance from shadow split 1 to split 2. Relative to [member directional_shadow_max_distance]. Only used when [member directional_shadow_mode] is [code]SHADOW_PARALLEL_2_SPLITS[/code] or [code]SHADOW_PARALLEL_4_SPLITS[/code].
The distance from shadow split 2 to split 3. Relative to [member directional_shadow_max_distance]. Only used when [member directional_shadow_mode] is [code]SHADOW_PARALLEL_4_SPLITS[/code].
Change the currently opened directory to the one passed as an argument. The argument can be relative to the current directory (e.g. [code]newdir[/code] or [code]../newdir[/code]), or an absolute path (e.g. [code]/tmp/newdir[/code] or [code]res://somedir/newdir[/code]).
The method returns one of the error code constants defined in [@GlobalScope] (OK or ERR_*).
Changes the currently opened directory to the one passed as an argument. The argument can be relative to the current directory (e.g. [code]newdir[/code] or [code]../newdir[/code]), or an absolute path (e.g. [code]/tmp/newdir[/code] or [code]res://somedir/newdir[/code]).
The method returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code] or [code]ERR_*[/code]).
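A minimal sketch (the directory name is hypothetical):
[codeblock]
var dir = Directory.new()
if dir.open("res://") == OK:
    if dir.change_dir("scenes") != OK:
        print("Could not change directory")
[/codeblock]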
</description>
</method>
<methodname="copy">
@ -44,8 +44,8 @@
<argumentindex="1"name="to"type="String">
</argument>
<description>
Copy the [i]from[/i] file to the [i]to[/i] destination. Both arguments should be paths to files, either relative or absolute. If the destination file exists and is not access-protected, it will be overwritten.
Returns one of the error code constants defined in [@GlobalScope] (OK, FAILED or ERR_*).
Copies the [code]from[/code] file to the [code]to[/code] destination. Both arguments should be paths to files, either relative or absolute. If the destination file exists and is not access-protected, it will be overwritten.
Returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code], [code]FAILED[/code] or [code]ERR_*[/code]).
</description>
</method>
<methodname="current_is_dir"qualifiers="const">
@ -93,14 +93,14 @@
<argumentindex="0"name="idx"type="int">
</argument>
<description>
On Windows, return the name of the drive (partition) passed as an argument (e.g. [code]C:[/code]). On other platforms, or if the requested drive does not existed, the method returns an empty String.
On Windows, returns the name of the drive (partition) passed as an argument (e.g. [code]C:[/code]). On other platforms, or if the requested drive does not exist, the method returns an empty String.
</description>
</method>
<methodname="get_drive_count">
<returntype="int">
</return>
<description>
On Windows, return the number of drives (partitions) mounted on the current filesystem. On other platforms, the method returns 0.
On Windows, returns the number of drives (partitions) mounted on the current filesystem. On other platforms, the method returns 0.
</description>
</method>
<methodname="get_next">
@ -115,7 +115,7 @@
<returntype="int">
</return>
<description>
On Unix desktop systems, return the available space on the current directory's disk. On other platforms, this information is not available and the method returns 0 or -1.
On UNIX desktop systems, returns the available space on the current directory's disk. On other platforms, this information is not available and the method returns 0 or -1.
Initialize the stream used to list all files and directories using the [method get_next] function, closing the current opened stream if needed. Once the stream has been processed, it should typically be closed with [method list_dir_end].
Initializes the stream used to list all files and directories using the [method get_next] function, closing the current opened stream if needed. Once the stream has been processed, it should typically be closed with [method list_dir_end].
If you pass [code]skip_navigational[/code], then [code].[/code] and [code]..[/code] will be filtered out.
If you pass [code]skip_hidden[/code], then hidden files will be filtered out.
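A typical listing loop might look like this (error handling kept minimal):
[codeblock]
var dir = Directory.new()
if dir.open("res://") == OK:
    dir.list_dir_begin(true) # Skip "." and ".."
    var file_name = dir.get_next()
    while file_name != "":
        print(file_name)
        file_name = dir.get_next()
    dir.list_dir_end()
[/codeblock]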
</description>
@ -135,7 +135,7 @@
<returntype="void">
</return>
<description>
Close the current stream opened with [method list_dir_begin] (whether it has been fully processed with [method get_next] or not does not matter).
Closes the current stream opened with [method list_dir_begin] (whether it has been fully processed with [method get_next] or not does not matter).
</description>
</method>
<methodname="make_dir">
@ -144,8 +144,8 @@
<argumentindex="0"name="path"type="String">
</argument>
<description>
Create a directory. The argument can be relative to the current directory, or an absolute path. The target directory should be placed in an already existing directory (to create the full path recursively, see [method make_dir_recursive]).
The method returns one of the error code constants defined in [@GlobalScope] (OK, FAILED or ERR_*).
Creates a directory. The argument can be relative to the current directory, or an absolute path. The target directory should be placed in an already existing directory (to create the full path recursively, see [method make_dir_recursive]).
The method returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code], [code]FAILED[/code] or [code]ERR_*[/code]).
</description>
</method>
<methodname="make_dir_recursive">
@ -154,8 +154,8 @@
<argumentindex="0"name="path"type="String">
</argument>
<description>
Create a target directory and all necessary intermediate directories in its path, by calling [method make_dir] recursively. The argument can be relative to the current directory, or an absolute path.
Returns one of the error code constants defined in [@GlobalScope] (OK, FAILED or ERR_*).
Creates a target directory and all necessary intermediate directories in its path, by calling [method make_dir] recursively. The argument can be relative to the current directory, or an absolute path.
Returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code], [code]FAILED[/code] or [code]ERR_*[/code]).
</description>
</method>
<methodname="open">
@ -164,8 +164,8 @@
<argumentindex="0"name="path"type="String">
</argument>
<description>
Open an existing directory of the filesystem. The [i]path[/i] argument can be within the project tree ([code]res://folder[/code]), the user directory ([code]user://folder[/code]) or an absolute path of the user filesystem (e.g. [code]/tmp/folder[/code] or [code]C:\tmp\folder[/code]).
The method returns one of the error code constants defined in [@GlobalScope] (OK or ERR_*).
Opens an existing directory of the filesystem. The [code]path[/code] argument can be within the project tree ([code]res://folder[/code]), the user directory ([code]user://folder[/code]) or an absolute path of the user filesystem (e.g. [code]/tmp/folder[/code] or [code]C:\tmp\folder[/code]).
The method returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code] or [code]ERR_*[/code]).
</description>
</method>
<methodname="remove">
@ -174,8 +174,8 @@
<argumentindex="0"name="path"type="String">
</argument>
<description>
Delete the target file or an empty directory. The argument can be relative to the current directory, or an absolute path. If the target directory is not empty, the operation will fail.
Returns one of the error code constants defined in [@GlobalScope] (OK or FAILED).
Deletes the target file or an empty directory. The argument can be relative to the current directory, or an absolute path. If the target directory is not empty, the operation will fail.
Returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code] or [code]FAILED[/code]).
</description>
</method>
<methodname="rename">
@ -186,8 +186,8 @@
<argumentindex="1"name="to"type="String">
</argument>
<description>
Rename (move) the [i]from[/i] file to the [i]to[/i] destination. Both arguments should be paths to files, either relative or absolute. If the destination file exists and is not access-protected, it will be overwritten.
Returns one of the error code constants defined in [@GlobalScope] (OK or FAILED).
Renames (moves) the [code]from[/code] file to the [code]to[/code] destination. Both arguments should be paths to files, either relative or absolute. If the destination file exists and is not access-protected, it will be overwritten.
Returns one of the error code constants defined in [@GlobalScope] ([code]OK[/code] or [code]FAILED[/code]).
EditorImportPlugins provide a way to extend the editor's resource import functionality. Use them to import resources from custom files or to provide alternatives to the editor's existing importers. Register your [EditorPlugin] with [method EditorPlugin.add_import_plugin].
EditorImportPlugins work by associating with specific file extensions and a resource type. See [method get_recognized_extensions] and [method get_resource_type]). They may optionally specify some import presets that affect the import process. EditorImportPlugins are responsible for creating the resources and saving them in the [code].import[/code] directory.
EditorImportPlugins work by associating with specific file extensions and a resource type. See [method get_recognized_extensions] and [method get_resource_type]. They may optionally specify some import presets that affect the import process. EditorImportPlugins are responsible for creating the resources and saving them in the [code].import[/code] directory.
Below is an example EditorImportPlugin that imports a [Mesh] from a file with the extension ".special" or ".spec":
[codeblock]
tool
@ -41,7 +41,7 @@
return FAILED
var mesh = Mesh.new()
# Fill the Mesh with data read in 'file', left as exercise to the reader
# Fill the Mesh with data read in "file", left as an exercise to the reader
var filename = save_path + "." + get_save_extension()
ResourceSaver.save(filename, mesh)
@ -58,21 +58,21 @@
<argumentindex="0"name="preset"type="int">
</argument>
<description>
Get the options and default values for the preset at this index. Returns an Array of Dictionaries with the following keys: [code]name[/code], [code]default_value[/code], [code]property_hint[/code] (optional), [code]hint_string[/code] (optional), [code]usage[/code] (optional).
Gets the options and default values for the preset at this index. Returns an Array of Dictionaries with the following keys: [code]name[/code], [code]default_value[/code], [code]property_hint[/code] (optional), [code]hint_string[/code] (optional), [code]usage[/code] (optional).
Get the order of this importer to be run when importing resources. Higher values will be called later. Use this to ensure the importer runs after the dependencies are already imported.
Gets the order of this importer to be run when importing resources. Higher values will be called later. Use this to ensure the importer runs after the dependencies are already imported.
Get the number of initial presets defined by the plugin. Use [method get_import_options] to get the default options for the preset and [method get_preset_name] to get the name of the preset.
Gets the number of initial presets defined by the plugin. Use [method get_import_options] to get the default options for the preset and [method get_preset_name] to get the name of the preset.
Plugins are used by the editor to extend functionality. The most common types of plugins are those which edit a given node or resource type, import plugins and export plugins. Also see [EditorScript] to add functions to the editor.
Plugins are used by the editor to extend functionality. The most common types of plugins are those which edit a given node or resource type, import plugins and export plugins. See also [EditorScript] to add functions to the editor.
Add a script at [code]path[/code] to the Autoload list as [code]name[/code].
Adds a script at [code]path[/code] to the Autoload list as [code]name[/code].
</description>
</method>
<methodname="add_control_to_bottom_panel">
@ -29,7 +29,7 @@
<argumentindex="1"name="title"type="String">
</argument>
<description>
Add a control to the bottom panel (together with Output, Debug, Animation, etc). Returns a reference to the button added. It's up to you to hide/show the button when needed. When your plugin is deactivated, make sure to remove your custom control with [method remove_control_from_bottom_panel] and free it with [code]queue_free()[/code].
Adds a control to the bottom panel (together with Output, Debug, Animation, etc.). Returns a reference to the button added. It's up to you to hide/show the button when needed. When your plugin is deactivated, make sure to remove your custom control with [method remove_control_from_bottom_panel] and free it with [code]queue_free()[/code].
</description>
</method>
<methodname="add_control_to_container">
@ -40,7 +40,7 @@
<argumentindex="1"name="control"type="Control">
</argument>
<description>
Add a custom control to a container (see CONTAINER_* enum). There are many locations where custom controls can be added in the editor UI.
Adds a custom control to a container (see [code]CONTAINER_*[/code] enum). There are many locations where custom controls can be added in the editor UI.
Please remember that you have to manage the visibility of your custom controls yourself (and likely hide them after adding them).
When your plugin is deactivated, make sure to remove your custom control with [method remove_control_from_container] and free it with [code]queue_free()[/code].
</description>
@ -53,7 +53,7 @@
<argumentindex="1"name="control"type="Control">
</argument>
<description>
Add the control to a specific dock slot (see DOCK_* enum for options).
Adds the control to a specific dock slot (see [code]DOCK_*[/code] enum for options).
If the dock is repositioned and the plugin remains active, the editor will save the dock position across sessions.
When your plugin is deactivated, make sure to remove your custom control with [method remove_control_from_docks] and free it with [code]queue_free()[/code].
</description>
@ -70,9 +70,9 @@
<argumentindex="3"name="icon"type="Texture">
</argument>
<description>
Add a custom type, which will appear in the list of nodes or resources. An icon can be optionally passed.
Adds a custom type, which will appear in the list of nodes or resources. An icon can be optionally passed.
When the given node or resource is selected, the base type will be instanced (i.e. "Spatial", "Control", "Resource"), then the script will be loaded and set to this object.
You can use the virtual method [method handles] to check if your custom object is being edited by checking the script or using 'is' keyword.
You can use the virtual method [method handles] to check if your custom object is being edited by checking the script or using the [code]is[/code] keyword.
During run-time, this will be a simple object with a script, so this function does not need to be called then.
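A hedged sketch from inside an [EditorPlugin] script (the type name, script path and icon path are hypothetical):
[codeblock]
func _enter_tree():
    add_custom_type("Heart", "Node2D", preload("heart_node.gd"), preload("heart_icon.png"))

func _exit_tree():
    remove_custom_type("Heart")
[/codeblock]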
Add a custom menu to 'Project > Tools' as [code]name[/code] that calls [code]callback[/code] on an instance of [code]handler[/code] with a parameter [code]ud[/code] when user activates it.
Adds a custom menu to [b]Project > Tools[/b] as [code]name[/code] that calls [code]callback[/code] on an instance of [code]handler[/code] with a parameter [code]ud[/code] when user activates it.
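For instance (the menu name and callback are hypothetical):
[codeblock]
func _enter_tree():
    add_tool_menu_item("Say Hello", self, "_on_hello_pressed")

func _exit_tree():
    remove_tool_menu_item("Say Hello")

func _on_hello_pressed(ud):
    print("Hello from Project > Tools")
[/codeblock]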
</description>
</method>
<methodname="add_tool_submenu_item">
@ -234,7 +234,7 @@
<returntype="PoolStringArray">
</return>
<description>
This is for editors that edit scriptbased objects. You can return a list of breakpoints in the format (script:line), for example: res://path_to_script.gd:25
This is for editors that edit script-based objects. You can return a list of breakpoints in the format ([code]script:line[/code]), for example: [code]res://path_to_script.gd:25[/code].
</description>
</method>
<methodname="get_editor_interface">
@ -260,21 +260,22 @@
<returntype="ScriptCreateDialog">
</return>
<description>
Gets the Editor's dialogue used for making scripts. Note that users can configure it before use.
Gets the Editor's dialogue used for making scripts.
[b]Note:[/b] Users can configure it before use.
</description>
</method>
<methodname="get_state"qualifiers="virtual">
<returntype="Dictionary">
</return>
<description>
Get the state of your plugin editor. This is used when saving the scene (so state is kept when opening it again) and for switching tabs (so state can be restored when the tab returns).
Gets the state of your plugin editor. This is used when saving the scene (so state is kept when opening it again) and for switching tabs (so state can be restored when the tab returns).
</description>
</method>
<methodname="get_undo_redo">
<returntype="UndoRedo">
</return>
<description>
Get the undo/redo object. Most actions in the editor can be undoable, so use this object to make sure this happens when it's worth it.
Gets the undo/redo object. Most actions in the editor can be undoable, so use this object to make sure this happens when it's worth it.
Get the GUI layout of the plugin. This is used to save the project's editor layout when [method queue_save_layout] is called or the editor layout was changed(For example changing the position of a dock).
Gets the GUI layout of the plugin. This is used to save the project's editor layout when [method queue_save_layout] is called or the editor layout is changed (for example, when changing the position of a dock).
</description>
</method>
<methodname="handles"qualifiers="virtual">
@ -299,7 +300,7 @@
<returntype="bool">
</return>
<description>
Returns [code]true[/code] if this is a main screen editor plugin (it goes in the workspaces selector together with '2D', '3D', and 'Script').
Returns [code]true[/code] if this is a main screen editor plugin (it goes in the workspace selector together with [b]2D[/b], [b]3D[/b], [b]Script[/b] and [b]AssetLib[/b]).
</description>
</method>
<methodname="hide_bottom_panel">
@ -339,7 +340,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Remove an Autoload [code]name[/code] from the list.
Removes an Autoload [code]name[/code] from the list.
</description>
</method>
<methodname="remove_control_from_bottom_panel">
@ -348,7 +349,7 @@
<argumentindex="0"name="control"type="Control">
</argument>
<description>
Remove the control from the bottom panel. You have to manually [code]queue_free()[/code] the control.
Removes the control from the bottom panel. You have to manually [code]queue_free()[/code] the control.
</description>
</method>
<methodname="remove_control_from_container">
@ -359,7 +360,7 @@
<argumentindex="1"name="control"type="Control">
</argument>
<description>
Remove the control from the specified container. You have to manually [code]queue_free()[/code] the control.
Removes the control from the specified container. You have to manually [code]queue_free()[/code] the control.
</description>
</method>
<methodname="remove_control_from_docks">
@ -368,7 +369,7 @@
<argumentindex="0"name="control"type="Control">
</argument>
<description>
Remove the control from the dock. You have to manually [code]queue_free()[/code] the control.
Removes the control from the dock. You have to manually [code]queue_free()[/code] the control.
</description>
</method>
<methodname="remove_custom_type">
@ -377,7 +378,7 @@
<argumentindex="0"name="type"type="String">
</argument>
<description>
Remove a custom type added by [method add_custom_type]
Removes a custom type added by [method add_custom_type].
</description>
</method>
<methodname="remove_export_plugin">
@ -426,7 +427,7 @@
<argumentindex="0"name="name"type="String">
</argument>
<description>
Removes a menu [code]name[/code] from 'Project > Tools'.
Removes a menu [code]name[/code] from [b]Project > Tools[/b].
Emitted when user changes the workspace (2D, 3D, Script, AssetLib). Also works with custom screens defined by plugins.
Emitted when user changes the workspace ([b]2D[/b], [b]3D[/b], [b]Script[/b], [b]AssetLib[/b]). Also works with custom screens defined by plugins.
</description>
</signal>
<signalname="resource_saved">
@ -493,7 +494,7 @@
<argumentindex="0"name="scene_root"type="Node">
</argument>
<description>
Emitted when the scene is changed in the editor. The argument will return the root node of the scene that has just become active. If this scene is new and empty, the argument will be null.
Emitted when the scene is changed in the editor. The argument will return the root node of the scene that has just become active. If this scene is new and empty, the argument will be [code]null[/code].
</description>
</signal>
<signalname="scene_closed">