opengl – How do I make the 3rd person camera follow the space ship in my Space Shooter game?

I am making a 3D space shooter game using OpenGL and Bullet Physics, and I am having a hard time with the 3rd person camera. In my gameplay video you can see that the camera follows the spaceship rotating on the X axis without a problem, and rotating on the Z axis also works as long as the spaceship is oriented towards the horizon and parallel to the ground. But when I roll to the right and pull the spaceship up, the camera loses it completely. The code for my camera is as follows:

float CalculateHorizontalDistance() {
    if (!mpPlayer) return 0.f;

    return distanceFromPlayer * glm::cos(glm::radians(-UpAngle));
}

float CalculateVerticalDistance() {
    if (!mpPlayer) return 0.f;

    return distanceFromPlayer * glm::sin(glm::radians(-UpAngle));
}

void calculateCameraPosition(float horizDistance, float verticDistance)
{
    if (!mpPlayer) return;

    glm::vec3 playerFront = mpPlayer->GetFront();
    glm::vec3 playerPos = mpPlayer->GetPosition();

    Position = playerPos + ((-playerFront) * distanceFromPlayer);

    UpAngle = 180.f - mpPlayer->GetRoll();
    //RightAngle = -mpPlayer->GetPitch();
    RollAngle =  mpPlayer->GetYaw() - 180.f;

    //float theta = mpPlayer->GetRotationY() + angleAroundPlayer;
    //float offsetX = horizDistance * glm::sin(glm::radians(theta));
    //float offsetZ = horizDistance * glm::cos(glm::radians(theta));

    //Position.x = mpPlayer->GetPosition().x - offsetX;
    //Position.y = mpPlayer->GetPosition().y + verticDistance;
    //Position.z = mpPlayer->GetPosition().z - offsetZ;
}

The above code calculates the position and rotation of the camera. The commented-out code computed the position using trigonometric calculations for the x and z sides of the triangle formed between the camera position and the player position. That approach did not work well: the position itself was fine, but I could never get the camera to stay behind the spaceship.

In the code that is not commented out, I use as the camera position the player position minus the front vector times an offset. This works fine in the simple sense that the camera always stays behind the spaceship. I also update the pitch and roll, which works almost fine, and this is mostly where I need help to get things right. I also never update the yaw of the camera.

This is how I get yaw, pitch and roll from the Bullet Physics rigid body of the player (the spaceship):

btScalar yaw;
btScalar pitch;
btScalar roll;
body->getCenterOfMassTransform().getBasis().getEulerZYX(yaw, pitch, roll, 1);

The following code is how the orientation of the camera is calculated:

void updateCameraVectors()
{
    // Yaw
    glm::quat aroundY = glm::angleAxis(glm::radians(-RightAngle), glm::vec3(0, 1, 0));

    // Pitch
    glm::quat aroundX = glm::angleAxis(glm::radians(UpAngle), glm::vec3(1, 0, 0));

    // Roll
    glm::quat aroundZ = glm::angleAxis(glm::radians(RollAngle), glm::vec3(0, 0, 1));

    Orientation = aroundY * aroundX * aroundZ;

    glm::quat qF = Orientation * glm::quat(0, 0, 0, -1) * glm::conjugate(Orientation);
    Front = { qF.x, qF.y, qF.z };
    Right = glm::normalize(glm::cross(Front, WorldUp));
    Up = glm::normalize(glm::cross(Right, Front));
}

and lastly how the view matrix of the camera is calculated:

glm::mat4 GetViewMatrix()
{
    // The camera moves inversely to the player, so the view transform is built
    // from the inverse (conjugate) orientation and the negated position.

    glm::quat reverseOrient = glm::conjugate(Orientation);
    glm::mat4 rot = glm::mat4_cast(reverseOrient);
    glm::mat4 translation = glm::translate(glm::mat4(1.0), -Position);

    return rot * translation;
}

Can someone help me fix the rotation problems I am facing? I am also open to modifying the Camera class to work with Bullet's quaternions instead of the Euler angles I am currently using. Any code or ideas are very welcome. Thank you for reading through.
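One direction that may be worth trying, sketched below, is to skip the Euler conversion entirely and drive the camera from the ship's Bullet quaternion. This is only a minimal sketch against the class shown above: FollowPlayer and its heightOffset parameter are hypothetical additions, and it assumes the camera still has the Position, Orientation, Front, Up, Right and distanceFromPlayer members, that the ship's local front axis is -Z, and that its local up axis is +Y.

void FollowPlayer(btRigidBody* shipBody, float heightOffset)
{
    const btTransform t = shipBody->getCenterOfMassTransform();
    const btQuaternion q = t.getRotation();

    // Bullet stores (x, y, z, w); GLM's quat constructor takes (w, x, y, z).
    glm::quat shipOrientation(q.w(), q.x(), q.y(), q.z());

    // Ship axes in world space (assuming local front is -Z and local up is +Y).
    glm::vec3 shipFront = shipOrientation * glm::vec3(0.f, 0.f, -1.f);
    glm::vec3 shipUp    = shipOrientation * glm::vec3(0.f, 1.f, 0.f);

    glm::vec3 shipPos(t.getOrigin().x(), t.getOrigin().y(), t.getOrigin().z());

    // Sit behind the ship along its own axes (plus an optional lift along its up
    // vector), so rolls, climbs and dives are followed automatically.
    Position = shipPos - shipFront * distanceFromPlayer + shipUp * heightOffset;

    // Reuse the ship's orientation directly; GetViewMatrix() above already
    // inverts it with the conjugate, so no Euler extraction is needed.
    Orientation = shipOrientation;
    Front = shipFront;
    Up = shipUp;
    Right = glm::normalize(glm::cross(Front, Up));
}

With something along these lines, updateCameraVectors() and the getEulerZYX() extraction are not needed for the follow camera at all; if you want the camera to lag slightly behind the ship, you could glm::slerp from the previous Orientation toward shipOrientation each frame instead of copying it directly.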


Unable to follow Bitcoin stock?

I tried to follow the Bitcoin stock on Google Finance and, where the follow/add button should be, it instead shows “CCY”. I tried Bitcoin Cash, Ethereum and Litecoin too with the same result, but for some reason the button is there for Ripple. I am able to see the Bitcoin (and the other mentioned) stocks, but I am unable to follow them. What does “CCY” mean, and why am I unable to follow the stocks mentioned above?

python – DQN, DDQN, DDDQN in tensorflow (follow up)

This is a follow-up to my previous post, so I'll briefly state what the code does; you may refer back to the previous version for more details. This is a self-learning deep reinforcement learning agent that learns by itself how to play Atari games. My previous post includes a demonstration of the agent playing a Pong game after being trained.

Updates

  • Double DQN (DDQN).
  • Dueling DQN (DDDQN).
  • ReplayBuffer class.
  • N-step sampling support.
  • Multiple environment support (same game for each).
  • tf.function adjustment and a custom training method instead of tf.keras.models.Model.fit().
  • 3x boost in training speed, 80 -> 200 frames/s on a Tesla T4 (3 environments + defaults); a usage sketch of the new options follows below.
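Here is a minimal, hypothetical usage sketch of the options added in this update. It assumes dqn.py and utils.py below are importable as-is; the particular combination of flags is only an example.

# Hypothetical usage: enable the DDQN, dueling and n-step options on the same
# 3-environment Pong setup used in dqn.py's __main__ block.
from dqn import DQN
from utils import create_gym_env

envs = create_gym_env('PongNoFrameskip-v4', 3)
agent = DQN(envs, double=True, duel=True, transition_steps=4)
agent.fit(19)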

Please check the cProfile report for a 100,000-step run; there are bottlenecks that may be improved.
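The report itself is not reproduced here, but if you want to regenerate it, a minimal sketch of the profiling run could look like the following (it assumes the modules below are importable; the output file name is arbitrary).

# Hypothetical reproduction of the profiling run: profile a 100,000-step
# training session and print the 20 most expensive calls by cumulative time.
import cProfile
import pstats

from dqn import DQN
from utils import create_gym_env

agent = DQN(create_gym_env('PongNoFrameskip-v4', 3))
cProfile.run('agent.fit(19, max_steps=100000)', 'dqn.prof')
pstats.Stats('dqn.prof').sort_stats('cumulative').print_stats(20)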

dqn.py

import os
from collections import defaultdict, deque
from time import perf_counter, sleep

import cv2
import gym
import numpy as np
import tensorflow as tf
import wandb
from tensorflow.keras.layers import Add, Conv2D, Dense, Flatten, Input, Lambda
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

from utils import ReplayBuffer, create_gym_env


class DQN:
    def __init__(
        self,
        envs,
        buffer_max_size=10000,
        buffer_initial_size=None,
        buffer_batch_size=32,
        checkpoint=None,
        reward_buffer_size=100,
        epsilon_start=1.0,
        epsilon_end=0.02,
        transition_steps=1,
        gamma=0.99,
        double=False,
        duel=False,
        cnn_fc_units=512,
    ):
        """
        Initialize agent settings.
        Args:
            envs: gym environment that returns states as atari frames.
            buffer_max_size: Replay buffer maximum size.
            buffer_batch_size: Batch size when any buffer from the given buffers
                get_sample() method is called.
            checkpoint: Path to .tf filename under which the trained model will be saved.
            reward_buffer_size: Size of the reward buffer that will hold the last n total
                rewards which will be used for calculating the mean reward.
            epsilon_start: Start value of epsilon that regulates exploration during training.
            epsilon_end: End value of epsilon which represents the minimum value of epsilon
                which will not be decayed further when reached.
            transition_steps: n-step transition for example given s1, s2, s3, s4 and n_step = 4,
                transition will be s1 -> s4 (defaults to 1, s1 -> s2)
            gamma: Discount factor used for gradient updates.
            double: If True, DDQN is used for gradient updates.
            duel: If True, a dueling extension will be added to the model.
            cnn_fc_units: Number of units passed to Dense layer.
        """
        assert envs, 'No Environments given'
        self.n_envs = len(envs)
        self.envs = envs
        self.env_ids = [id(env) for env in self.envs]
        replay_buffers = (
            ReplayBuffer(
                buffer_max_size,
                buffer_initial_size,
                transition_steps,
                gamma,
                buffer_batch_size,
            )
            for _ in range(self.n_envs)
        )
        self.buffers = {
            env_id: buffer for (env_id, buffer) in zip(self.env_ids, replay_buffers)
        }
        self.main_model = self.create_cnn_model(duel, cnn_fc_units)
        self.target_model = self.create_cnn_model(duel, cnn_fc_units)
        self.buffer_batch_size = buffer_batch_size
        self.checkpoint_path = checkpoint
        self.total_rewards = deque(maxlen=reward_buffer_size * self.n_envs)
        self.best_reward = -float('inf')
        self.mean_reward = -float('inf')
        self.states = {}
        self.reset_envs()
        self.steps = 0
        self.frame_speed = 0
        self.last_reset_step = 0
        self.epsilon_start = self.epsilon = epsilon_start
        self.epsilon_end = epsilon_end
        self.games = 0
        self.transition_steps = transition_steps
        self.gamma = gamma
        self.double = double
        self.batch_indices = tf.range(
            self.buffer_batch_size * self.n_envs, dtype=tf.int64
        )[:, tf.newaxis]
        self.episode_rewards = defaultdict(lambda: 0)

    def create_cnn_model(self, duel=False, fc_units=512):
        """
        Create convolutional model.
        Args:
            duel: If True, a dueling extension will be added to the model.
            fc_units: Number of units passed to Dense layer.

        Returns:
            tf.keras.models.Model
        """
        x0 = Input(self.envs[0].observation_space.shape)
        x = Conv2D(32, 8, 4, activation='relu')(x0)
        x = Conv2D(64, 4, 2, activation='relu')(x)
        x = Conv2D(64, 3, 1, activation='relu')(x)
        x = Flatten()(x)
        fc1 = Dense(units=fc_units, activation='relu')(x)
        if not duel:
            output = Dense(units=self.envs[0].action_space.n)(fc1)
        else:
            fc2 = Dense(units=fc_units, activation='relu')(x)
            advantage = Dense(units=self.envs[0].action_space.n)(fc1)
            advantage = Lambda(
                lambda a: a - tf.expand_dims(tf.reduce_mean(a, axis=1), -1)
            )(advantage)
            value = Dense(units=1)(fc2)
            output = Add()([advantage, value])
        model = Model(x0, output)
        model.call = tf.function(model.call)
        return model

    def reset_envs(self):
        """
        Reset all environments in self.envs
        Returns:
            None
        """
        for env_id, env in zip(self.env_ids, self.envs):
            self.states[env_id] = env.reset()

    def get_action(self, state, training=True):
        """
        Generate action following an epsilon-greedy policy.
        Args:
            state: Atari frame that needs an action.
            training: If False, no use of randomness will apply.
        Returns:
            A random action or Q argmax.
        """
        if training and np.random.random() < self.epsilon:
            return self.envs[0].action_space.sample()
        q_values = self.main_model(np.expand_dims(state, 0)).numpy()
        return np.argmax(q_values)

    def get_action_indices(self, actions):
        """
        Get indices that will be passed to tf.gather_nd()
        Args:
            actions: Action tensor of shape self.batch_size

        Returns:
            Indices.
        """
        return tf.concat(
            (self.batch_indices, tf.cast(actions[:, tf.newaxis], tf.int64)), -1
        )

    @tf.function
    def get_targets(self, batch):
        """
        Get target values for gradient updates.
        Args:
            batch: A batch of observations in the form of
                ((states), (actions), (rewards), (dones), (next states))
        Returns:
            None
        """
        states, actions, rewards, dones, new_states = batch
        q_states = self.main_model(states)
        if self.double:
            new_state_actions = tf.argmax(self.main_model(new_states), 1)
            new_state_q_values = self.target_model(new_states)
            a = self.get_action_indices(new_state_actions)
            new_state_values = tf.gather_nd(new_state_q_values, a)
        else:
            new_state_values = tf.reduce_max(self.target_model(new_states), axis=1)
        new_state_values = tf.where(
            dones, tf.constant(0, new_state_values.dtype), new_state_values
        )
        target_values = tf.identity(q_states)
        target_value_update = new_state_values * (
            self.gamma ** self.transition_steps
        ) + tf.cast(rewards, tf.float32)
        indices = self.get_action_indices(actions)
        target_values = tf.tensor_scatter_nd_update(
            target_values, indices, target_value_update
        )
        return target_values

    def checkpoint(self):
        """
        Save model weights if current reward > best reward.
        Returns:
            None
        """
        if self.best_reward < self.mean_reward:
            print(f'Best reward updated: {self.best_reward} -> {self.mean_reward}')
            if self.checkpoint_path:
                self.main_model.save_weights(self.checkpoint_path)
        self.best_reward = max(self.mean_reward, self.best_reward)

    def display_metrics(self):
        """
        Display progress metrics to the console.
        Returns:
            None
        """
        display_titles = (
            'frame',
            'games',
            'speed',
            'mean reward',
            'best reward',
            'epsilon',
            'episode rewards',
        )
        display_values = (
            self.steps,
            self.games,
            f'{round(self.frame_speed)} steps/s',
            self.mean_reward,
            self.best_reward,
            np.around(self.epsilon, 2),
            list(self.total_rewards)[-self.n_envs:],
        )
        display = (
            f'{title}: {value}' for title, value in zip(display_titles, display_values)
        )
        print(', '.join(display))

    def update_metrics(self, start_time):
        """
        Update progress metrics.
        Args:
            start_time: Episode start time, used for calculating fps.
        Returns:
            None
        """
        self.checkpoint()
        self.frame_speed = (self.steps - self.last_reset_step) / (
            perf_counter() - start_time
        )
        self.last_reset_step = self.steps
        self.mean_reward = np.around(np.mean(self.total_rewards), 2)

    def fill_buffers(self):
        """
        Fill self.buffer up to its initial size.
        """
        total_size = sum(buffer.initial_size for buffer in self.buffers.values())
        sizes = {}
        for i, env in enumerate(self.envs, 1):
            env_id = id(env)
            buffer = self.buffers[env_id]
            state = self.states[env_id]
            while len(buffer) < buffer.initial_size:
                action = env.action_space.sample()
                new_state, reward, done, _ = env.step(action)
                buffer.append((state, action, reward, done, new_state))
                state = new_state
                if done:
                    state = env.reset()
                sizes[env_id] = len(buffer)
                filled = sum(sizes.values())
                complete = round((filled / total_size) * 100, 2)
                print(
                    f'\rFilling replay buffer {i}/{self.n_envs} ==> {complete}% | '
                    f'{filled}/{total_size}',
                    end='',
                )
        print()
        self.reset_envs()

    @tf.function
    def train_on_batch(self, model, x, y, sample_weight=None):
        """
        Train on a given batch.
        Args:
            model: tf.keras.Model
            x: States tensor
            y: Targets tensor
            sample_weight: sample_weight passed to model.compiled_loss()

        Returns:
            None
        """
        with tf.GradientTape() as tape:
            y_pred = model(x, training=True)
            loss = model.compiled_loss(
                y, y_pred, sample_weight, regularization_losses=model.losses
            )
        model.optimizer.minimize(loss, model.trainable_variables, tape=tape)
        model.compiled_metrics.update_state(y, y_pred, sample_weight)

    def get_training_batch(self, done_envs):
        """
        Join batches for each environment in self.envs
        Args:
            done_envs: A flag list for marking episode ends.

        Returns:
            batch: A batch of observations in the form of
                ((states), (actions), (rewards), (dones), (next states))
        """
        batches = []
        for env_id, env in zip(self.env_ids, self.envs):
            state = self.states[env_id]
            action = self.get_action(state)
            buffer = self.buffers[env_id]
            new_state, reward, done, _ = env.step(action)
            self.steps += 1
            self.episode_rewards[env_id] += reward
            buffer.append((state, action, reward, done, new_state))
            self.states[env_id] = new_state
            if done:
                done_envs.append(1)
                self.total_rewards.append(self.episode_rewards[env_id])
                self.games += 1
                self.episode_rewards[env_id] = 0
                self.states[env_id] = env.reset()
            batch = buffer.get_sample()
            batches.append(batch)
        if len(batches) > 1:
            return [np.concatenate(item) for item in zip(*batches)]
        return batches[0]

    def fit(
        self,
        target_reward,
        decay_n_steps=150000,
        learning_rate=1e-4,
        update_target_steps=1000,
        monitor_session=None,
        weights=None,
        max_steps=None,
    ):
        """
        Train agent on a supported environment
        Args:
            target_reward: Target reward, if achieved, the training will stop
            decay_n_steps: Maximum steps that determine epsilon decay rate.
            learning_rate: Model learning rate shared by both main and target networks.
            update_target_steps: Update target model every n steps.
            monitor_session: Session name to use for monitoring the training with wandb.
            weights: Path to .tf trained model weights to continue training.
            max_steps: Maximum number of steps, if reached the training will stop.
        Returns:
            None
        """
        if monitor_session:
            wandb.init(name=monitor_session)
        optimizer = Adam(learning_rate)
        if weights:
            self.main_model.load_weights(weights)
            self.target_model.load_weights(weights)
        self.main_model.compile(optimizer, loss='mse')
        self.target_model.compile(optimizer, loss='mse')
        self.fill_buffers()
        done_envs = []
        start_time = perf_counter()
        while True:
            if len(done_envs) == self.n_envs:
                self.update_metrics(start_time)
                start_time = perf_counter()
                self.display_metrics()
                done_envs.clear()
            if self.mean_reward >= target_reward:
                print(f'Reward achieved in {self.steps} steps!')
                break
            if max_steps and self.steps >= max_steps:
                print(f'Maximum steps exceeded')
                break
            self.epsilon = max(
                self.epsilon_end, self.epsilon_start - self.steps / decay_n_steps
            )
            training_batch = self.get_training_batch(done_envs)
            targets = self.get_targets(training_batch)
            self.train_on_batch(self.main_model, training_batch[0], targets)
            if self.steps % update_target_steps == 0:
                self.target_model.set_weights(self.main_model.get_weights())

    def play(
        self,
        weights=None,
        video_dir=None,
        render=False,
        frame_dir=None,
        frame_delay=0.0,
    ):
        """
        Play and display a game.
        Args:
            weights: Path to trained weights, if not specified, the most recent
                model weights will be used.
            video_dir: Path to directory to save the resulting game video.
            render: If True, the game will be displayed.
            frame_dir: Path to directory to save game frames.
            frame_delay: Delay between rendered frames.
        Returns:
            None
        """
        env_in_use = self.envs[0]
        if weights:
            self.main_model.load_weights(weights)
        if video_dir:
            env_in_use = gym.wrappers.Monitor(env_in_use, video_dir)
        state = env_in_use.reset()
        steps = 0
        for dir_name in (video_dir, frame_dir):
            os.makedirs(dir_name or '.', exist_ok=True)
        while True:
            if render:
                env_in_use.render()
            if frame_dir:
                frame = env_in_use.render(mode='rgb_array')
                cv2.imwrite(os.path.join(frame_dir, f'{steps:05d}.jpg'), frame)
            action = self.get_action(state, False)
            state, reward, done, _ = env_in_use.step(action)
            if done:
                break
            steps += 1
            sleep(frame_delay)


if __name__ == '__main__':
    gym_envs = create_gym_env('PongNoFrameskip-v4', 3)
    agn = DQN(gym_envs)
    agn.fit(19)

utils.py

import random
from collections import deque

import cv2
import gym
import numpy as np


class AtariPreprocessor(gym.Wrapper):
    """
    gym wrapper for preprocessing atari frames.
    """

    def __init__(self, env, frame_skips=4, resize_shape=(84, 84)):
        """
        Initialize preprocessing settings.
        Args:
            env: gym environment that returns states as atari frames.
            frame_skips: Number of frame skips to use per environment step.
            resize_shape: (m, n) output frame size.
        """
        super(AtariPreprocessor, self).__init__(env)
        self.skips = frame_skips
        self.frame_shape = resize_shape
        self.observation_space.shape = (*resize_shape, 1)

    def process_frame(self, frame):
        """
        Resize and convert atari frame to grayscale.
        Args:
            frame: Image as numpy.ndarray

        Returns:
            Processed frame.
        """
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frame = cv2.resize(frame, self.frame_shape) / 255
        return np.expand_dims(frame, -1)

    def step(self, action: int):
        """
        Step respective to self.skips.
        Args:
            action: Action supported by self.env

        Returns:
            (state, reward, done, info)
        """
        total_reward = 0
        state, done, info = 3 * [None]
        for _ in range(self.skips):
            state, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:
                break
        return self.process_frame(state), total_reward, done, info

    def reset(self, **kwargs):
        """
        Reset self.env
        Args:
            **kwargs: kwargs passed to self.env.reset()

        Returns:
            Processed atari frame.
        """
        observation = self.env.reset(**kwargs)
        return self.process_frame(observation)


class ReplayBuffer(deque):
    """
    Replay buffer that holds state transitions
    """

    def __init__(
        self,
        max_size,
        initial_size=None,
        n_steps=1,
        gamma=0.99,
        batch_size=32,
    ):
        """
        Initialize buffer settings.
        Args:
            max_size: Maximum transitions to store.
            initial_size: Maximum transitions to store before starting the training.
            n_steps: Steps separating start and end frames.
            gamma: Discount factor.
            batch_size: Size of the sampling method batch.
        """
        super(ReplayBuffer, self).__init__(maxlen=max_size)
        self.initial_size = initial_size or max_size
        self.n_steps = n_steps
        self.gamma = gamma
        self.temp_buffer = []
        self.batch_size = batch_size

    def reset_temp_history(self):
        """
        Calculate start and end frames and clear temp buffer.
        Returns:
            state, action, reward, done, new_state
        """
        reward = 0
        for exp in self.temp_buffer[::-1]:
            reward *= self.gamma
            reward += exp[2]
        state = self.temp_buffer[0][0]
        action = self.temp_buffer[0][1]
        done = self.temp_buffer[-1][3]
        new_state = self.temp_buffer[-1][-1]
        self.temp_buffer.clear()
        return state, action, reward, done, new_state

    def append(self, experience):
        """
        Append experience and auto-allocate to temp buffer / main buffer(self)
        Args:
            experience: state, action, reward, done, new_state

        Returns:
            None
        """
        if (self.temp_buffer and self.temp_buffer[-1][3]) or len(
            self.temp_buffer
        ) == self.n_steps:
            adjusted_sample = self.reset_temp_history()
            super(ReplayBuffer, self).append(adjusted_sample)
        self.temp_buffer.append(experience)

    def get_sample(self):
        """
        Get a sample of the replay buffer.
        Returns:
            A batch of observations in the form of
            ((states), (actions), (rewards), (dones), (next states)),
        """
        memories = random.sample(self, self.batch_size)
        return [np.array(item) for item in zip(*memories)]


def create_gym_env(env_name, n=1, preprocess=True, *args, **kwargs):
    """
    Create gym environment and initialize preprocessing settings.
    Args:
        env_name: Name of the environment to be passed to gym.make()
        n: Number of environments to create.
        preprocess: If True, AtariPreprocessor will be used.
        *args: args to be passed to AtariPreprocessor
        **kwargs: kwargs to be passed to AtariPreprocessor

    Returns:
        A list of gym environments.
    """
    envs = [gym.make(env_name) for _ in range(n)]
    if preprocess:
        envs = [AtariPreprocessor(env, *args, **kwargs) for env in envs]
    return envs


c – remove kth last element from singly-linked list – Follow up

This code is a revised version of an implementation for which improvements were suggested. The original question was asked here:
remove kth last element from singly-linked list

Credits to: Toby, Andreas, Arkadiusz.

What has changed:

  1. Removed length from the xllist struct.
  2. Check whether k is bigger than the list length on the fly.

Code:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct llnode
{
    int value;
    struct llnode *next;
} llnode;

typedef struct xllist
{
    llnode * head;
    llnode * tail;
} xllist;

bool create_node(xllist *list, int value)
{
    llnode *node = malloc(sizeof *node);
    if(!node)
    {
        return false;
    }

    node->value = value;
    node->next = NULL;
    if(!list->head)
    {
        list->head = node;
        list->tail = node;
    }
    list->tail->next = node;
    list->tail = node;
    return true;
}

bool del_element_from_last(xllist *llist, int k)
{
    //window with 2 pointers, length of k
    //prev is the prev node to the window
    llnode *prev;
    llnode *last;
    int len;  //list length

    if(llist->head)
    {
        last = llist->head;
        prev = llist->head;
        len = 1;
    }

    for(; last; last=last->next)
    {
        len++;
        if(len > k+1)
            prev = prev->next;
    }

    if(len < k) //len is smaller than k
    {
        return false;
    }

    if(len == k)  //means del 1st element from the list
    {
        llist->head = llist->head->next;
    }

    //remove first node of the window
    printf("deleted element:%d n", prev->next->value);
    prev->next = prev->next->next;
    return true;
}

int main(void)
{
    xllist llist = {NULL, NULL};

    for(int i=0; i<100; i++)
    {
        if(!create_node(&llist, 100+i))
            printf("create failn");
    }

    del_element_from_last(&llist, 15);
}


unity – How can I build a follow/chase script. A gameobject that will follow and if needed will chase the player?

I have a GameObject named NAVI.
NAVI has some children with animators that give it a kind of living effect.

NAVI should be a child of the player's rig at all times, giving the impression that the player is holding NAVI in his hand.

From time to time in the game, NAVI will leave the player's hand by himself and move to other places in the world to do things. For example, NAVI will move to some other object in the world, do something there, and when he has finished, NAVI should move back to the player's hand.

During all this time the player keeps doing his own things in the game.
The player should "forget" about NAVI while NAVI is busy, and NAVI should take care of reaching the player and becoming a child of the player's hand again.

The player has a rig with many children, and the player also has an animator.
For example, the player's hand moves up and down, changing its position while the player is walking.

This is a screenshot of the player settings. The player has these components: Animator, Rigidbody, Capsule Collider, Third Person User Control, Third Person Character. The player is moved with the WSAD keys at a constant speed that I set in the inspector in the Move Speed Multiplier field.

Player settings

In the player's rig, under the child named rig_f_middle.03.R, I added an empty GameObject as a child, named it Navi Parent, and set Navi Parent to position 0,0,0 and rotation 0,0,0.

Navi Parent

If I now drag NAVI to be a child of Navi Parent in the editor and change NAVI's position and rotation to 0,0,0, it looks like the player is holding NAVI:

Navi in players hand

This should be the default when NAVI is in the player's hand.
Sometimes NAVI will leave the player, move to another place and do some work, a mission or a quest; the player doesn't see what NAVI is doing and can wait or move away and do something else, and NAVI will find the player and reach him again.

  • NAVI should move to a target (or targets) to do quests/missions: leave the player's hand smoothly and slowly, increase his speed toward the target, slow down when getting close, and reach the target to do something.

  • When NAVI has finished his work at the target(s), he should again increase his speed slowly to a constant speed, find the player's hand, move to it, and smoothly become a child of the player's hand again like before.

  • The game should start with NAVI in the player's hand, and then NAVI should start doing quests/missions on his own.

If I attach this script to NAVI and set targetToFollow to the Navi Parent, it works nicely. If I put NAVI somewhere far away in the world and run the game with reachTarget set to true, NAVI reaches the player and smoothly settles into the player's hand.

I'm not sure the speeds are right, and I don't think I need the follow part for now, only the reaching part and the part that makes NAVI a child.

I'm also not sure how to handle the part where, when the game starts with NAVI in the player's hand or when NAVI returns to the player's hand, I send NAVI off to other quests (objects in the world) and then back to the hand.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using UnityEngine;
using UnityEngine.UI;
using UnityStandardAssets.Characters.ThirdPerson;

public class Follow : MonoBehaviour
{
    public Transform targetToFollow;
    public Transform missionTarget;
    public GameObject naviParent;
    public ThirdPersonCharacter thirdPersonCharacter;
    public Text textDistance;
    public Text textSpeed;
    public float lookAtRotationSpeed;
    public float moveSpeed;
    public float followRadius = 1.5f;
    public float fastRadius = 5f;
    public float speedBoost = 0.5f;
    public bool follow = false;
    public bool reachTarget = false;

    private bool isNaviChild = false;
    private bool changeToOrigin = false;
    private Vector3 lTargetDir;
    private float originSpeed;
    private float originFollowRadius;

    void Start()
    {
        originSpeed = moveSpeed;
        originFollowRadius = followRadius;

        if (reachTarget)
        {
            moveSpeed = thirdPersonCharacter.m_MoveSpeedMultiplier * 5;
            followRadius = 0.1f;
        }
    }

    // FixedUpdate is called once per physics step
    void FixedUpdate()
    {
        if (follow)
        {
            if (reachTarget)
            {
                changeToOrigin = true;
                moveSpeed = thirdPersonCharacter.m_MoveSpeedMultiplier * 5;
                followRadius = 0.1f;
            }
            else if (changeToOrigin && reachTarget == false)
            {
                moveSpeed = originSpeed;
                followRadius = originFollowRadius;

                changeToOrigin = false;
            }

            lTargetDir = targetToFollow.position - transform.position;
            lTargetDir.y = 0.0f;

            Turn();

            float ms = moveSpeed;
            var distance = Vector3.Distance(transform.position, targetToFollow.position);
            // Compute a position no further than followRadius away from our target.
            Vector3 fromTarget = Vector3.ClampMagnitude(
                -lTargetDir, followRadius);
            Vector3 stopPoint = targetToFollow.position + fromTarget;

            // Compute a speed that's faster when far away and slower when close.
            float speedBlend = Mathf.Clamp01((distance - followRadius) / (fastRadius - followRadius));

            ms = moveSpeed + speedBlend * speedBoost;

            // Move as far as we can at our speed ms to reach the stopPoint, without overshooting.
            transform.position = Vector3.MoveTowards(transform.position,
                stopPoint, Time.deltaTime * ms);

            var dist = Vector3.Distance(transform.position, targetToFollow.position);
            if (dist < 0.1f && isNaviChild == false && reachTarget)
            {
                transform.parent = targetToFollow;
                transform.localPosition = new Vector3(0, 0, 0);
                transform.localRotation = Quaternion.identity;
                transform.localScale = new Vector3(0.001f, 0.001f, 0.001f);

                isNaviChild = true;
            }

            originSpeed = moveSpeed;
            originFollowRadius = followRadius;
        }
    }

    private void Turn()
    {
        transform.rotation = Quaternion.RotateTowards(transform.rotation,
                Quaternion.LookRotation(lTargetDir), Time.time * lookAtRotationSpeed);
    }

    private void Mission()
    {

    }

}

The main logic and goal is to make NAVI a kind of helper for the player that goes off to do other things but always comes back and stays in the player's hand.
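One possible direction for the missing quest logic, offered only as a hedged sketch built on the Follow script above: a small component that swaps targetToFollow between a mission object and the Navi Parent. The class name NaviQuestGiver, the missionDuration field and the StartMission() entry point are all made-up names, and the Follow script's private isNaviChild flag would need to be reset (or made public) before each return trip for the re-parenting to work more than once.

using System.Collections;
using UnityEngine;

// Hypothetical helper: sends NAVI to a mission target and then back to the hand.
public class NaviQuestGiver : MonoBehaviour
{
    public Follow naviFollow;        // the Follow script attached to NAVI
    public Transform naviParent;     // the empty "Navi Parent" in the player's hand
    public Transform missionTarget;  // any object in the world NAVI should visit
    public float missionDuration = 3f;

    public void StartMission()
    {
        StartCoroutine(RunMission());
    }

    private IEnumerator RunMission()
    {
        // Detach NAVI from the hand and send him to the mission object.
        naviFollow.transform.parent = null;
        naviFollow.targetToFollow = missionTarget;
        naviFollow.reachTarget = false;
        naviFollow.follow = true;

        // Wait until NAVI is roughly at the mission target, then "do the quest".
        while (Vector3.Distance(naviFollow.transform.position, missionTarget.position)
               > naviFollow.followRadius + 0.1f)
        {
            yield return null;
        }
        yield return new WaitForSeconds(missionDuration);

        // Send NAVI back; the Follow script re-parents him once he is close enough,
        // provided its isNaviChild flag has been reset for this trip.
        naviFollow.targetToFollow = naviParent;
        naviFollow.reachTarget = true;
    }
}

Calling StartMission() whenever NAVI is sitting in the hand would give the start-in-hand, go-on-quest, return-to-hand cycle described above.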

selenium – why we should follow robots.txt file

Can anyone tell me exactly what kind of consequences our company could face if we do not follow the robots.txt file? We are crawling the following social media sites:

  • Facebook
  • Linkedin
  • Instagram
  • Twitter
  • Reddit
  • Tumblr
  • Youtube

Our scenario: currently, we do not follow any social media robots.txt file. We fetch data from all the major social media platforms using Selenium, Scrapy, and other technologies, dump it into our database, then do some analysis on it and show it on our dashboard for our clients.
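As a purely technical aside, honouring robots.txt is cheap to add to a crawler. Below is a minimal sketch using Python's standard library; the user agent string and the URL are illustrative placeholders, not your real crawler or targets.

# Hypothetical pre-flight check against a site's robots.txt before fetching a URL.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser('https://www.reddit.com/robots.txt')
parser.read()

user_agent = 'ExampleCrawler'              # placeholder crawler name
url = 'https://www.reddit.com/r/example/'  # placeholder target URL

if parser.can_fetch(user_agent, url):
    print('robots.txt allows fetching', url)
else:
    print('robots.txt disallows fetching', url)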

Note: Our company is registered in the Netherlands.