Search Results

Search found 15621 results on 625 pages for 'creating'.

Page 15/625

  • Creating a new DHCP lease database

    - by Stee1bear
    I have Ubuntu Server set up as my DHCP server. We have had it installed for a few years and our dhcpd.leases file contains over 4000 entries. I want to clean it up and basically start a new lease file to get a current list (around 1500 entries). I have read the walkthrough on how to make a new lease file and get it started, which leads me to this question: will the DHCP server try to give each unit a new IP address, or will it build the new database with the IP addresses that the units report to it?

    Read the article

  • CloudSeeder: CLR Stored Procedures For Creating CPU Pressure

    - by Adam Machanic
    Sometimes, in the interest of testing various scenarios that your server might encounter, it's useful to be able to quickly simulate some condition or another: I/O, memory, CPU pressure, and so on. This last one is something I've been playing with a lot recently. CPU pressure in SQL Server creates all sorts of interesting side-effects, such as exacerbating waits and making various other conditions much easier to reproduce. In order to make this simpler, I've created the attached CLR library. This...(read more)

    Read the article

  • Creating basic ACPI event makes the system unusably slow

    - by skerit
    I want to change a few settings on my laptop when I switch to battery power. I created a new event in /etc/acpi/events/cust-battery and it looks like this:

        event=battery
        action=/home/skerit/power.sh

    I put a simple command in the power.sh file:

        echo This is a test >> /home/skerit/powertest

    Now, when I tail this file it shows "This is a test" 4-5 times upon switching to battery power. However, the system becomes totally unstable. It slows down significantly. I can't change anything in the terminal. The terminal and certain parts of the screen (like the gnome system monitor applet) go blank from time to time. What can be the cause of that? It's a simple echo that gets executed a few times!

    Read the article

  • OpenGL directional light creating black spots

    - by AnonymousDeveloper
    I probably ought to start by saying that I suspect the problem is that one of my vectors is not in the correct "space", but I don't know for sure. I am having a strange problem with a directional light. When I move the camera away from (0.0, 0.0, 0.0) it creates tiny black spots that grow larger as the distance increases. I apologize ahead of time for the length of the code. Vertex shader: #version 410 core in vec3 vf_normal; in vec3 vf_bitangent; in vec3 vf_tangent; in vec2 vf_textureCoordinates; in vec3 vf_vertex; out vec3 tc_normal; out vec3 tc_bitangent; out vec3 tc_tangent; out vec2 tc_textureCoordinates; out vec3 tc_vertex; uniform mat3 vf_m_normal; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform float vf_te_inner; uniform float vf_te_outer; void main() { tc_normal = vf_normal; tc_bitangent = vf_bitangent; tc_tangent = vf_tangent; tc_textureCoordinates = vf_textureCoordinates; tc_vertex = vf_vertex; gl_Position = vf_m_mvp * vec4(vf_vertex, 1.0); } Tessellation Control shader: #version 410 core layout (vertices = 3) out; in vec3 tc_normal[]; in vec3 tc_bitangent[]; in vec3 tc_tangent[]; in vec2 tc_textureCoordinates[]; in vec3 tc_vertex[]; out vec3 te_normal[]; out vec3 te_bitangent[]; out vec3 te_tangent[]; out vec2 te_textureCoordinates[]; out vec3 te_vertex[]; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; #define ID gl_InvocationID float getTessLevelInner(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_inner - avgDistance), 1.0, vf_te_inner); } float getTessLevelOuter(float distance0, float distance1) { float avgDistance = (distance0 + distance1) / 2.0; return clamp((vf_te_outer - avgDistance), 1.0, vf_te_outer); } void main() { te_normal[gl_InvocationID] = tc_normal[gl_InvocationID]; te_bitangent[gl_InvocationID] = tc_bitangent[gl_InvocationID]; te_tangent[gl_InvocationID] = tc_tangent[gl_InvocationID]; te_textureCoordinates[gl_InvocationID] = tc_textureCoordinates[gl_InvocationID]; te_vertex[gl_InvocationID] = tc_vertex[gl_InvocationID]; float eyeToVertexDistance0 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[0], 1.0)).xyz); float eyeToVertexDistance1 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[1], 1.0)).xyz); float eyeToVertexDistance2 = distance(vec3(0.0), vec4(vf_m_view * vec4(tc_vertex[2], 1.0)).xyz); gl_TessLevelOuter[0] = getTessLevelOuter(eyeToVertexDistance1, eyeToVertexDistance2); gl_TessLevelOuter[1] = getTessLevelOuter(eyeToVertexDistance2, eyeToVertexDistance0); gl_TessLevelOuter[2] = getTessLevelOuter(eyeToVertexDistance0, eyeToVertexDistance1); gl_TessLevelInner[0] = getTessLevelInner(eyeToVertexDistance2, eyeToVertexDistance0); } Tessellation Evaluation shader: #version 410 core layout (triangles, equal_spacing, cw) in; in vec3 te_normal[]; in vec3 te_bitangent[]; in vec3 te_tangent[]; in vec2 te_textureCoordinates[]; in vec3 te_vertex[]; out vec3 g_normal; out vec3 g_bitangent; out vec4 g_patchDistance; out vec3 g_tangent; out vec2 g_textureCoordinates; out vec3 g_vertex; uniform float vf_te_inner; uniform float vf_te_outer; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; 
uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_displace; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 interpolate2D(vec2 v0, vec2 v1, vec2 v2) { return vec2(gl_TessCoord.x) * v0 + vec2(gl_TessCoord.y) * v1 + vec2(gl_TessCoord.z) * v2; } vec3 interpolate3D(vec3 v0, vec3 v1, vec3 v2) { return vec3(gl_TessCoord.x) * v0 + vec3(gl_TessCoord.y) * v1 + vec3(gl_TessCoord.z) * v2; } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2*d*d); return d; } float getDisplacement(vec2 t0, vec2 t1, vec2 t2) { float displacement = 0.0; vec2 textureCoordinates = interpolate2D(t0, t1, t2); vec2 vector = ((t0 + t1 + t2) / 3.0); float sampleDistance = sqrt((vector.x * vector.x) + (vector.y * vector.y)); sampleDistance /= ((vf_te_inner + vf_te_outer) / 2.0); displacement += texture(vf_t_displace, textureCoordinates).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, -sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2(-sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, sampleDistance)).x; displacement += texture(vf_t_displace, textureCoordinates + vec2( sampleDistance, -sampleDistance)).x; return (displacement / 5.0); } void main() { g_normal = normalize(interpolate3D(te_normal[0], te_normal[1], te_normal[2])); g_bitangent = normalize(interpolate3D(te_bitangent[0], te_bitangent[1], te_bitangent[2])); g_patchDistance = vec4(gl_TessCoord, (1.0 - gl_TessCoord.y)); g_tangent = normalize(interpolate3D(te_tangent[0], te_tangent[1], te_tangent[2])); g_textureCoordinates = interpolate2D(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); g_vertex = interpolate3D(te_vertex[0], te_vertex[1], te_vertex[2]); float displacement = getDisplacement(te_textureCoordinates[0], te_textureCoordinates[1], te_textureCoordinates[2]); float d2 = min(min(min(g_patchDistance.x, g_patchDistance.y), g_patchDistance.z), g_patchDistance.w); d2 = amplify(d2, 50, -0.5); g_vertex += g_normal * displacement * 0.1 * d2; gl_Position = vf_m_mvp * vec4(g_vertex, 1.0); } Geometry shader: #version 410 core layout (triangles) in; layout (triangle_strip, max_vertices = 3) out; in vec3 g_normal[3]; in vec3 g_bitangent[3]; in vec4 g_patchDistance[3]; in vec3 g_tangent[3]; in vec2 g_textureCoordinates[3]; in vec3 g_vertex[3]; out vec3 f_tangent; out vec3 f_bitangent; out vec3 f_eyeDirection; out vec3 f_lightDirection; out vec3 f_normal; out vec4 f_patchDistance; out vec4 f_shadowCoordinates; out vec2 f_textureCoordinates; out vec3 f_vertex; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat3 vf_m_normal; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; void main() { int index = 0; while (index < 3) { vec3 vertexNormal_cameraspace = vf_m_normal * normalize(g_normal[index]); vec3 vertexTangent_cameraspace = vf_m_normal * normalize(f_tangent); vec3 vertexBitangent_cameraspace = vf_m_normal * normalize(f_bitangent); mat3 TBN = transpose(mat3( vertexTangent_cameraspace, vertexBitangent_cameraspace, vertexNormal_cameraspace )); 
vec3 eyeDirection = -(vf_m_view * vf_m_model * vec4(g_vertex[index], 1.0)).xyz; vec3 lightDirection = normalize(-(vf_m_view * vec4(vf_l_position, 1.0)).xyz); f_eyeDirection = TBN * eyeDirection; f_lightDirection = TBN * lightDirection; f_normal = normalize(g_normal[index]); f_patchDistance = g_patchDistance[index]; f_shadowCoordinates = vf_m_depthBias * vec4(g_vertex[index], 1.0); f_textureCoordinates = g_textureCoordinates[index]; f_vertex = (vf_m_model * vec4(g_vertex[index], 1.0)).xyz; gl_Position = gl_in[index].gl_Position; EmitVertex(); index ++; } EndPrimitive(); } Fragment shader: #version 410 core in vec3 f_bitangent; in vec3 f_eyeDirection; in vec3 f_lightDirection; in vec3 f_normal; in vec4 f_patchDistance; in vec4 f_shadowCoordinates; in vec3 f_tangent; in vec2 f_textureCoordinates; in vec3 f_vertex; out vec4 fragColor; uniform vec4 vf_l_color; uniform vec3 vf_l_position; uniform mat4 vf_m_depthBias; uniform mat4 vf_m_model; uniform mat4 vf_m_mvp; uniform mat4 vf_m_projection; uniform mat4 vf_m_view; uniform sampler2D vf_t_diffuse; uniform sampler2D vf_t_normal; uniform sampler2DShadow vf_t_shadow; uniform sampler2D vf_t_specular; vec2 poissonDisk[16] = vec2[]( vec2(-0.94201624, -0.39906216), vec2( 0.94558609, -0.76890725), vec2(-0.09418410, -0.92938870), vec2( 0.34495938, 0.29387760), vec2(-0.91588581, 0.45771432), vec2(-0.81544232, -0.87912464), vec2(-0.38277543, 0.27676845), vec2( 0.97484398, 0.75648379), vec2( 0.44323325, -0.97511554), vec2( 0.53742981, -0.47373420), vec2(-0.26496911, -0.41893023), vec2( 0.79197514, 0.19090188), vec2(-0.24188840, 0.99706507), vec2(-0.81409955, 0.91437590), vec2( 0.19984126, 0.78641367), vec2( 0.14383161, -0.14100790) ); float random(vec3 seed, int i) { vec4 seed4 = vec4(seed,i); float dot_product = dot(seed4, vec4(12.9898, 78.233, 45.164, 94.673)); return fract(sin(dot_product) * 43758.5453); } float amplify(float d, float scale, float offset) { d = scale * d + offset; d = clamp(d, 0, 1); d = 1 - exp2(-2.0 * d * d); return d; } void main() { vec3 lightColor = vf_l_color.xyz; float lightPower = vf_l_color.w; vec3 materialDiffuseColor = texture(vf_t_diffuse, f_textureCoordinates).xyz; vec3 materialAmbientColor = vec3(0.1, 0.1, 0.1) * materialDiffuseColor; vec3 materialSpecularColor = texture(vf_t_specular, f_textureCoordinates).xyz; vec3 n = normalize(texture(vf_t_normal, f_textureCoordinates).rgb * 2.0 - 1.0); vec3 l = normalize(f_lightDirection); float cosTheta = clamp(dot(n, l), 0.0, 1.0); vec3 E = normalize(f_eyeDirection); vec3 R = reflect(-l, n); float cosAlpha = clamp(dot(E, R), 0.0, 1.0); float visibility = 1.0; float bias = 0.005 * tan(acos(cosTheta)); bias = clamp(bias, 0.0, 0.01); for (int i = 0; i < 4; i ++) { float shading = (0.5 / 4.0); int index = i; visibility -= shading * (1.0 - texture(vf_t_shadow, vec3(f_shadowCoordinates.xy + poissonDisk[index] / 3000.0, (f_shadowCoordinates.z - bias) / f_shadowCoordinates.w))); }\n" fragColor.xyz = materialAmbientColor + visibility * materialDiffuseColor * lightColor * lightPower * cosTheta + visibility * materialSpecularColor * lightColor * lightPower * pow(cosAlpha, 5); fragColor.w = texture(vf_t_diffuse, f_textureCoordinates).w; } The following images should be enough to give you an idea of the problem. Before moving the camera: Moving the camera just a little. Moving it to the center of the scene.

    Read the article

  • C++ and SDL Trouble Creating a STL Vector of a Game Object

    - by Jackson Blades
    I am trying to create a Space Invaders clone using C++ and SDL. The problem I am having is in trying to create Waves of Enemies. I am trying to model this by making my Waves a vector of 8 Enemy objects. My Enemy constructor takes two arguments, an x and y offset. My Wave constructor also takes two arguments, an x and y offset. What I am trying to do is have my Wave constructor initialize a vector of Enemies, and have each enemy given a different x offset so that they are spaced out appropriately.

        Enemy::Enemy(int x, int y)
        {
            box.x = x;
            box.y = y;
            box.w = ENEMY_WIDTH;
            box.h = ENEMY_HEIGHT;
            xVel = ENEMY_WIDTH / 2;
        }

        Wave::Wave(int x, int y)
        {
            box.x = x;
            box.y = y;
            box.w = WAVE_WIDTH;
            box.y = WAVE_HEIGHT;
            xVel = (-1)*ENEMY_WIDTH;
            yVel = 0;

            std::vector<Enemy> enemyWave;
            for (int i = 0; i < enemyWave.size(); i++)
            {
                Enemy temp(box.x + ((ENEMY_WIDTH + 16) * i), box.y);
                enemyWave.push_back(temp);
            }
        }

    I guess what I am asking is if there is a cleaner, more elegant way to do this sort of initialization with vectors, or if this is right at all. Any help is greatly appreciated.
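
    A minimal sketch (my own rework, not part of the post) of how the constructor could actually fill the vector: it loops a fixed count of 8 instead of testing enemyWave.size() on a still-empty vector, assumes enemyWave is a std::vector<Enemy> member of Wave rather than a local variable, and assigns box.h where the original sets box.y a second time.

        // Hypothetical rework of Wave::Wave from the question above.
        Wave::Wave(int x, int y)
        {
            box.x = x;
            box.y = y;
            box.w = WAVE_WIDTH;
            box.h = WAVE_HEIGHT;              // the original assigns box.y here a second time
            xVel = -ENEMY_WIDTH;
            yVel = 0;

            const int enemiesPerWave = 8;     // assumption: 8 enemies per wave, as stated in the question
            enemyWave.reserve(enemiesPerWave);
            for (int i = 0; i < enemiesPerWave; ++i)
            {
                // space each enemy out by its width plus a 16 px gap
                enemyWave.push_back(Enemy(box.x + (ENEMY_WIDTH + 16) * i, box.y));
            }
        }

    As written in the question, the loop body never runs because the vector starts out empty, and a vector declared inside the constructor is destroyed as soon as the constructor returns, so it needs to be a member to survive.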

    Read the article

  • Creating a SQL Azure Database Should be Easier

    - by Ken Cox [MVP]
    Every time I try to create a database + tables + data for Windows Azure SQL I get errors. One of them is 'Filegroup reference and partitioning scheme is not supported in this version of SQL Server.' It’s partly due to my poor memory (since I’ve succeeded before) and partly due to the failure of tools that should be helping me. For example, when I want to create a script from an existing database on my local workstation, I use SQL Server Management Studio (currently v 11.0.2100.60). I go to Tasks > Generate Scripts, which brings up the nice Generate and Publish Scripts wizard. When I go into the Advanced button, under Script for Server Version, why don’t I see SQL Azure as an option by now? The tool should be sorting this out for me, right? Maybe this is available in SQL Server Data Tools? I haven’t got into that yet. Just merge the functionality with SSMS, please. Anyway, I pick an older version of SQL for the target and still need to tweak it for Azure. For example, I take out all the “[dbo].” stuff. Why is it put there by the wizard? I also have to get rid of "ON [PRIMARY]" to deal with the error I noted at the top. Yes, there’s information on what a table needs to look like in SQL Azure but the tools should know this so I don’t have to mess with it.

    Read the article

  • What is appropriate for creating a booking system?

    - by Joe
    I need a booking system for a theoretical project website. It would be an in-house job (not outsourcing to a web service) but all google searches on the subject yield results for said web services. I'd want to be able to use the system as such: For each day, there is availability recorded and if available a user can book in using the website, which sets that date to unavailable. There are other complexities, but this is the basic system I am trying to achieve - what would allow me to implement something like this?
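
    A minimal sketch (my own illustration, not from the question) of the core data model being described: a per-day availability flag that a booking flips to unavailable. The class name and the use of an ISO date string as the key are assumptions; a real site would back this with a database table.

        #include <iostream>
        #include <map>
        #include <string>

        // Hypothetical in-memory booking calendar keyed by ISO date ("YYYY-MM-DD").
        class BookingCalendar {
        public:
            void setAvailable(const std::string& date) { available[date] = true; }

            // Books the date if it is free; returns false if it was unavailable.
            bool book(const std::string& date) {
                auto it = available.find(date);
                if (it == available.end() || !it->second) return false;
                it->second = false;   // date is now taken
                return true;
            }

        private:
            std::map<std::string, bool> available;
        };

        int main() {
            BookingCalendar calendar;
            calendar.setAvailable("2024-07-01");
            std::cout << calendar.book("2024-07-01") << "\n";   // 1: booked
            std::cout << calendar.book("2024-07-01") << "\n";   // 0: already taken
        }

    The website front end then only needs two operations against this store: query availability for a day, and attempt a booking that marks the day unavailable.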

    Read the article

  • Creating a DrawableGameComponent

    - by Christian Frantz
    If I'm going to draw cubes effectively, I need to get rid of the numerous amounts of draw calls I have and what has been suggested is that I create a "mesh" of my cubes. I already have them being stored in a single vertex buffer, but the issue lies in my draw method where I am still looping through every cube in order to draw them. I thought this was necessary as each cube will have a set position, but it lowers the frame rate incredibly. What's the easiest way to go about this? I have a class CubeChunk that inherits Microsoft.Stuff.DrawableGameComponent, but I don't know what comes next. I suppose I could just use the chunk of cubes created in my cube class, but that would just keep me going in circles and drawing each cube individually. The goal here is to create a draw method that draws my chunk as a whole, and to not draw individual cubes as I've been doing.
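
    A minimal, API-agnostic sketch of the batching idea (my own illustration in C++, with invented names; the post itself is about XNA): bake each cube's position into its vertices once, so the whole chunk becomes one vertex array that can be uploaded and drawn with a single call instead of a per-cube loop.

        #include <vector>

        struct Vec3   { float x, y, z; };
        struct Vertex { Vec3 pos; /* normal, uv, colour, ... */ };

        // Hypothetical chunk builder: offsets a template cube's vertices by each
        // cube's position and appends them to one big mesh.
        std::vector<Vertex> buildChunkMesh(const std::vector<Vertex>& cubeTemplate,
                                           const std::vector<Vec3>& cubePositions)
        {
            std::vector<Vertex> mesh;
            mesh.reserve(cubeTemplate.size() * cubePositions.size());
            for (const Vec3& p : cubePositions) {
                for (Vertex v : cubeTemplate) {          // copy, then translate
                    v.pos.x += p.x;
                    v.pos.y += p.y;
                    v.pos.z += p.z;
                    mesh.push_back(v);
                }
            }
            return mesh;   // upload to one vertex buffer, draw once per chunk
        }

    The chunk only needs rebuilding when a cube is added or removed, so the per-frame cost drops to a single draw call per chunk.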

    Read the article

  • Creating an Ubuntu live USB for use with Gparted

    - by Jeff
    I've installed Ubuntu 12.04 on my Windows 7 Dell laptop. Recently I discovered that I'm running out of space on my Ubuntu partition, and I would like to enlarge it. Is it safe to resize partitions while they're in use, e.g. when I'm logged into Ubuntu? If so, I've run into this problem when I run GParted: it seems as if my hard drive is one big NTFS partition, like the Ubuntu partition doesn't exist. Is it possible Ubuntu runs off the NTFS partition, sharing it with Windows? What should I do?

    Read the article

  • Creating a Reporting Services Histogram Chart for Statistical Distribution Analysis

    Typically transactional data is quite detailed and analyzing an entire dataset on a graph is not feasible. Generally such data is analyzed using some form of aggregation or frequency distribution. One of the specialized charts generally used in Reporting Services for statistical distribution is the Histogram Chart. In this tip we look at how Histogram Charts can be used for statistical distribution analysis and how to create and configure this type of chart in SSRS.

    Read the article

  • Problem creating levels using inherited classes/polymorphism

    - by Adam
    I'm trying to write my level classes by having a base class that each level class inherits from... The base class uses pure virtual functions. My base class is only going to be used as a vector that'll have the inherited level classes pushed onto it... This is what my code looks like at the moment, I've tried various things and get the same result (segmentation fault).

        //level.h
        class Level
        {
            protected:
                Mix_Music *music;
                SDL_Surface *background;
                SDL_Surface *background2;
                vector<Enemy> enemy;
                bool loaded;
                int time;

            public:
                Level();
                virtual ~Level();
                int bgX, bgY;
                int bg2X, bg2Y;
                int width, height;
                virtual void load();
                virtual void unload();
                virtual void update();
                virtual void draw();
        };

        //level.cpp
        Level::Level()
        {
            bgX = 0;
            bgY = 0;
            bg2X = 0;
            bg2Y = 0;
            width = 2048;
            height = 480;
            loaded = false;
            time = 0;
        }

        Level::~Level() { }

        //virtual functions are empty...

    I'm not sure exactly what I'm supposed to include in the inherited class structure, but this is what I have at the moment...

        //level1.h
        class Level1: public Level
        {
            public:
                Level1();
                ~Level1();
                void load();
                void unload();
                void update();
                void draw();
        };

        //level1.cpp
        Level1::Level1() { }

        Level1::~Level1()
        {
            enemy.clear();
            Mix_FreeMusic(music);
            SDL_FreeSurface(background);
            SDL_FreeSurface(background2);
            music = NULL;
            background = NULL;
            background2 = NULL;
            Mix_CloseAudio();
        }

        void Level1::load()
        {
            music = Mix_LoadMUS("music/song1.xm");
            background = loadImage("image/background.png");
            background2 = loadImage("image/background2.png");
            Mix_OpenAudio(48000, MIX_DEFAULT_FORMAT, 2, 4096);
            Mix_PlayMusic(music, -1);
        }

        void Level1::unload() { }

        //functions have level-specific code in them...

    Right now for testing purposes, I just have the main loop call Level1 level1; and use the functions, but when I run the game I get a segmentation fault. This is the first time I've tried writing inherited classes, so I know I'm doing something wrong, but I can't seem to figure out what exactly.
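
    A minimal sketch (my own guesses at the usual culprits, not a confirmed diagnosis) of two changes that commonly fix this pattern: initialise the resource pointers so a destructor that runs before load() was ever called doesn't free garbage, and store the levels through pointers so the vector doesn't slice a Level1 down to its Level base.

        // Start the resources out null in the base constructor so Level1's
        // destructor can't free uninitialised pointers.
        Level::Level() : music(NULL), background(NULL), background2(NULL)
        {
            bgX = 0; bgY = 0; bg2X = 0; bg2Y = 0;
            width = 2048; height = 480;
            loaded = false; time = 0;
        }

        // Guard the frees for the same reason.
        Level1::~Level1()
        {
            enemy.clear();
            if (music)       Mix_FreeMusic(music);
            if (background)  SDL_FreeSurface(background);
            if (background2) SDL_FreeSurface(background2);
            Mix_CloseAudio();
        }

        // Hypothetical usage: hold the levels by pointer; a std::vector<Level>
        // of values would slice off the Level1 part and lose the overrides.
        std::vector<Level*> levels;

        void setupLevels()
        {
            levels.push_back(new Level1());
            levels.back()->load();
        }

    If Level is meant to be abstract, declaring the virtuals as pure (virtual void load() = 0;) also lets the compiler catch any override that was accidentally left out.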

    Read the article

  • Creating shooting arrow class [on hold]

    - by I.Hristov
    OK I am trying to write an XNA game with one entity controllable by the player, while the rest are bots (enemy and friendly) wandering around and... shooting each other from range. Now the shooting I suppose should be done with a separate class Arrow (for example). The resulting object would be the arrow appearing on screen moving from the shooting entity to the target entity. When the target is reached the arrow is no longer active, probably removed from the list. I plan to make a class with fields:

        Vector2 shootingEntity;
        Vector2 targetEntity;
        float arrowSpeed;
        float arrowAttackSpeed;
        int damageDone;
        bool isActive;

    Then when enemy entities get closer than an int rangeToShoot (which each entity will have as a field/prop) I plan to make a list of arrows emerging from each entity and going to the closest opposite one. I wonder if that logic will later allow many entities to shoot independently at different enemy entities at the same time. I know the question is broad, but it would be wise to ask whether the foundations of the idea are correct.
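
    A minimal sketch of the projectile idea (written here in C++ rather than XNA/C# to match the other code on this page; the names and the constant-speed, straight-line motion are my assumptions, not the poster's design):

        #include <cmath>

        struct Vec2 { float x, y; };

        // Hypothetical arrow: flies toward a captured target position and flags
        // itself inactive on arrival so the owner can apply damage and remove it.
        struct Arrow {
            Vec2  position;
            Vec2  target;
            float speed;            // world units per second
            int   damage;
            bool  isActive = true;

            void update(float dt) {
                if (!isActive) return;
                float dx = target.x - position.x;
                float dy = target.y - position.y;
                float dist = std::sqrt(dx * dx + dy * dy);
                float step = speed * dt;
                if (dist <= step) {          // reached the target this frame
                    position = target;
                    isActive = false;
                } else {
                    position.x += dx / dist * step;
                    position.y += dy / dist * step;
                }
            }
        };

    Each shooter keeping its own list of arrows, updating them every frame and erasing the ones with isActive == false, is exactly what lets many entities fire independently at different targets at the same time.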

    Read the article

  • Creating Database-Driven ASP.NET 3.5 Input and List Web Controls

    You might have read our tutorials on how to configure user input-based web controls in ASP.NET 3.5. This type of web control is used to gather user input from a web form. While those articles showed a basic way to configure these web controls, this article will show you a database-driven method that is much more efficient when you have to make changes to lots of options presented by the controls....

    Read the article

  • Creating Ubuntu Browser App Frames

    - by user73006
    After watching the video I was inspired to create a browser, but I'm stuck at one place. Could you please help me with this?

    Requirement:
    - Like you showed in your video, I want to create multiple buttons in my toolbar which will open a second toolbar or popup window.
    - From that popup window I want to select a specific button which will open my required browser.

    Question:
    - As shown in your video I created a new button, and if I try to open a new link using it, that works. But now I want to display a toolbar or popup window once anyone clicks on that button. How can I do that? The second toolbar needs to be activated only after clicking on that button.

    Things I tried:
    - As per my understanding I created a second toolbar and on that toolbar I created a button; now I want to know how to link that toolbar with my browser toolbar button.
    - I tried that by passing a Signal property on the second toolbar in Quickly, but something is missing.

    My code:

        class TvbrowserWindow(Window):
            gtype_name = "TvbrowserWindow"

            def finish_initializing(self, builder): # pylint: disable=E1002
                """Set up the main window"""
                super(TvbrowserWindow, self).finish_initializing(builder)
                self.AboutDialog = AboutTvbrowserDialog
                self.PreferencesDialog = PreferencesTvbrowserDialog
                # Code for other initialization actions should be added here.
                self.refreshbutton = self.builder.get_object("refreshbutton")
                self.SONY = self.builder.get_object("SONY")
                self.urlentry = self.builder.get_object("urlentry")
                self.scrolledwindow1 = self.builder.get_object("scrolledwindow1")
                self.webview = WebKit.WebView()
                self.scrolledwindow1.add(self.webview)
                self.webview.show()

            def on_refreshbutton_clicked(self, widget):
                print "refresh"

            def on_urlentry_activate(self, widget):
                url = widget.get_text()
                print url
                self.webview.open(url)

    Read the article

  • Resources for creating a turn-by-turn navigation system

    - by benwad
    I'm trying to create a kind of turn-by-turn satellite navigation system using the iOS SDK. I get the directions from the server and draw them on the map, then I keep getting location updates from the iPhone's GPS chip. Currently I start by finding the nearest turning point then, each time the user comes within a certain distance of the next turning point, a verbal cue is given and the turning point index is incremented. This is a delicate system and I'd like to make it more robust so I can tell when the user is going the wrong direction etc. Basically I'm looking for some literature about turn-by-turn navigation, in terms of tracking the user's progress and whether they're going the right direction. I'd have thought there's a lot of research out there but I can't seem to find anything apart from simple tutorials on how to use a given SDK or directions API. Can anyone direct me to a good run-through of the various techniques used in software such as TomTom or Google Maps Navigation?
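
    A minimal sketch of one standard building block for this (my own illustration, not taken from any navigation SDK): measure the distance from the current GPS fix to the active route segment, and treat a sustained large distance as "off route". The function names and the flat x/y coordinates (positions already projected to metres) are assumptions.

        #include <cmath>

        struct Point { double x, y; };   // metres in a local projection

        // Standard point-to-segment distance from fix p to the segment a-b.
        double distanceToSegment(Point p, Point a, Point b) {
            double abx = b.x - a.x, aby = b.y - a.y;
            double apx = p.x - a.x, apy = p.y - a.y;
            double len2 = abx * abx + aby * aby;
            double t = len2 > 0.0 ? (apx * abx + apy * aby) / len2 : 0.0;
            if (t < 0.0) t = 0.0;
            if (t > 1.0) t = 1.0;
            double cx = a.x + t * abx - p.x;
            double cy = a.y + t * aby - p.y;
            return std::sqrt(cx * cx + cy * cy);
        }

        // Hypothetical check: if several consecutive fixes are further than
        // `tolerance` metres from the segment between the previous and next
        // turning points, request a new route instead of waiting for the turn.
        bool isOffRoute(Point fix, Point prevTurn, Point nextTurn, double tolerance) {
            return distanceToSegment(fix, prevTurn, nextTurn) > tolerance;
        }

    The projection parameter t computed along the way also tells you whether successive fixes are moving toward or away from the next turning point, which gives a cheap wrong-direction check between voice cues.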

    Read the article

  • creating a bootable USB stick on OS X

    - by Rob
    I'm trying to create a bootable USB stick on OS X using http://www.ubuntu.com/download/help/create-a-usb-stick-on-mac-osx. When this finishes, I get a message in Terminal saying:

        695+1 records in
        695+1 records out
        729067520 bytes transferred in 264.563662 secs (2755736 bytes/sec)

    But a message pops up saying "The disk you inserted was not readable by this computer". The options are Initialize, Ignore or Eject. What am I doing wrong or omitting? (Oh - complete novice.)

    Read the article

  • How to make creating viewmodels at runtime less painful

    - by Mr Happy
    I apologize for the long question; it reads a bit like a rant, but I promise it's not! I've summarized my question(s) below.

    In the MVC world, things are straightforward. The Model has state, the View shows the Model, and the Controller does stuff to/with the Model (basically); a controller has no state. To do stuff the Controller has some dependencies on web services, repository, the lot. When you instantiate a controller you care about supplying those dependencies, nothing else. When you execute an action (method on Controller), you use those dependencies to retrieve or update the Model or call some other domain service. If there's any context, say some user wants to see the details of a particular item, you pass the Id of that item as a parameter to the Action. Nowhere in the Controller is there any reference to any state. So far so good.

    Enter MVVM. I love WPF, I love data binding. I love frameworks that make data binding to ViewModels even easier (using Caliburn Micro a.t.m.). I feel things are less straightforward in this world though. Let's do the exercise again: the Model has state, the View shows the ViewModel, and the ViewModel does stuff to/with the Model (basically); a ViewModel does have state! (To clarify: maybe it delegates all the properties to one or more Models, but that means it must have a reference to the model one way or another, which is state in itself.) To do stuff the ViewModel has some dependencies on web services, repository, the lot. When you instantiate a ViewModel you care about supplying those dependencies, but also the state. And this, ladies and gentlemen, annoys me to no end.

    Whenever you need to instantiate a ProductDetailsViewModel from the ProductSearchViewModel (from which you called the ProductSearchWebService which in turn returned IEnumerable<ProductDTO>, everybody still with me?), you can do one of these things:

    - Call new ProductDetailsViewModel(productDTO, _shoppingCartWebService /* dependency */);. This is bad; imagine 3 more dependencies, which means the ProductSearchViewModel needs to take on those dependencies as well. Also changing the constructor is painful.
    - Call _myInjectedProductDetailsViewModelFactory.Create().Initialize(productDTO);. The factory is just a Func, they are easily generated by most IoC frameworks. I think this is bad because Init methods are a leaky abstraction. You also can't use the readonly keyword for fields that are set in the Init method. I'm sure there are a few more reasons.
    - Call _myInjectedProductDetailsViewModelAbstractFactory.Create(productDTO);. So... this is the pattern (abstract factory) that is usually recommended for this type of problem. I thought it was genius since it satisfies my craving for static typing, until I actually started using it. The amount of boilerplate code is, I think, too much (you know, apart from the ridiculous variable names I get to use). For each ViewModel that needs runtime parameters you'll get two extra files (factory interface and implementation), and you need to type the non-runtime dependencies like 4 extra times. And each time the dependencies change, you get to change the factory as well. It feels like I don't even use a DI container anymore. (I think Castle Windsor has some kind of solution for this [with its own drawbacks, correct me if I'm wrong].)
    - Do something with anonymous types or a dictionary. I like my static typing.

    So, yeah. Mixing state and behavior in this way creates a problem which doesn't exist at all in MVC.
    And I feel like there currently isn't a really adequate solution for this problem. Now I'd like to observe some things:

    - People actually use MVVM. So they either don't care about all of the above, or they have some brilliant other solution.
    - I haven't found an in-depth example of MVVM with WPF. For example, the NDDD-sample project immensely helped me understand some DDD concepts. I'd really like it if someone could point me in the direction of something similar for MVVM/WPF.
    - Maybe I'm doing MVVM all wrong and I should turn my design upside down.
    - Maybe I shouldn't have this problem at all. Well, I know other people have asked the same question, so I think I'm not the only one.

    To summarize:

    - Am I correct to conclude that having the ViewModel be an integration point for both state and behavior is the reason for some difficulties with the MVVM pattern as a whole?
    - Is using the abstract factory pattern the only/best way to instantiate a ViewModel in a statically typed way?
    - Is there something like an in-depth reference implementation available?
    - Is having a lot of ViewModels with both state/behavior a design smell?
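
    A minimal, language-agnostic sketch of the factory shape being debated (written in C++ here with invented names; the post itself is about C#/WPF): the factory is constructed once with the injected services, and Create takes only the runtime state, so the calling view model never has to carry its child's dependencies.

        #include <memory>
        #include <string>
        #include <utility>

        struct ProductDTO { std::string id; };
        struct ShoppingCartService { /* stand-in dependency */ };

        class ProductDetailsViewModel {
        public:
            ProductDetailsViewModel(ProductDTO dto, std::shared_ptr<ShoppingCartService> cart)
                : dto_(std::move(dto)), cart_(std::move(cart)) {}   // fields can stay immutable
        private:
            ProductDTO dto_;
            std::shared_ptr<ShoppingCartService> cart_;
        };

        // The factory owns the non-runtime dependencies; callers supply only state.
        class ProductDetailsViewModelFactory {
        public:
            explicit ProductDetailsViewModelFactory(std::shared_ptr<ShoppingCartService> cart)
                : cart_(std::move(cart)) {}

            std::unique_ptr<ProductDetailsViewModel> Create(ProductDTO dto) const {
                return std::make_unique<ProductDetailsViewModel>(std::move(dto), cart_);
            }
        private:
            std::shared_ptr<ShoppingCartService> cart_;
        };

    The boilerplate complaint in the question is exactly that each such factory repeats the dependency list; containers that can generate these factories (the Func-style or typed factory facilities mentioned above) trade that repetition for configuration.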

    Read the article

  • Script for Creating Multiple ZIP archives from Multiple Folders

    - by user39288
    I want to be able to right-click multiple folders inside of a directory in nautilus, and be able to create separate zip archives from those folders in that same directory. If possible it would also be great if it automatically deleted the old folders. So, if I have 30 folders, I want to select those using control-shift, then go to scripts and run the script, and just have those 30 folders compressed into separate .zip archives, and have the old folders deleted (if possible). Anyone know how to accomplish this? I suck with the terminal, and am looking for a script solution.

    Read the article

  • Creating an XML / Flash Slideshow with Captions

    This article shows how to create a Flash slideshow using XML. Usually, when we create a slideshow in Flash, all the images get integrated into the movie itself; doing this slashes the reusability of the movie file (SWF) and the updatability of the movie content. Here I created an information bridge b

    Read the article

  • Creating a NAS Box with an Existing System

    Linux Magazine: "Standalone Network Attached Storage (NAS) servers provide file level storage to heterogeneous clients, enabling shared storage. This article presents the basics of NAS units (NFS servers) and how you can create one from an existing system"

    Read the article

  • Best system for creating a 2d racing track

    - by tesselode
    I am working on a 2D racing game and I'm trying to figure out the best way to define the track. At the very least, I need to be able to create a closed circuit with any amount of turns at any angle, and I need vehicles to collide with the edges of the track. I also want the following things to be true if possible (but they are optional):

    - The code is simple and free of funky workarounds and extras.
    - I can define all of the parts of the track (such as turns) relative to the previous parts.
    - I can predict the exact position of the road at a certain point (that way I can easily and cleanly make closed circuits).

    Here are my options:

    - Use a set of points. This is my current system. I have a set of turns and width changes that the track is supposed to make over time. I have a point which I transform according to these instructions, and I place a point every 5 steps or so, depending on how precise I want the track to be. These points make up the track. The main problem with this is the discrepancy between the collisions and the way the track is drawn. I won't get into too much detail, but the picture below shows what is happening (although it is exaggerated a bit). The blue lines are what is drawn, the red lines are what the vehicle collides with. I could work around this, but I'd rather avoid funky workaround code.
    - Bézier curves. These seem cool, but my first impression of them is that they'll be a little daunting to learn and are probably too complicated for my needs.
    - Some other kind of curve? I have heard of some other kinds of curves; maybe those are more applicable.
    - Use Box2D or another physics engine. Instead of defining the center of the track, I could use a physics engine to define shapes that make up the road. The downside to this, however, is that I have to put in a little more work to place the checkpoints.
    - Something completely different.

    Basically, what is the simplest system for generating a race track that would allow me to create closed circuits cleanly, handle collisions, and not have a ton of weird code?
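
    A minimal sketch of the curve route (my own illustration, not the poster's code): a Catmull-Rom spline passes through every control point, so a closed circuit is just the control points with wrap-around indexing, and sampling it yields centerline points that the renderer and the collision edges can share.

        #include <cmath>
        #include <vector>

        struct Vec2 { float x, y; };

        // Catmull-Rom interpolation between p1 and p2 (p0 and p3 are the neighbours), t in [0,1].
        Vec2 catmullRom(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t) {
            float t2 = t * t, t3 = t2 * t;
            auto blend = [&](float a, float b, float c, float d) {
                return 0.5f * ((2.0f * b) + (-a + c) * t +
                               (2.0f * a - 5.0f * b + 4.0f * c - d) * t2 +
                               (-a + 3.0f * b - 3.0f * c + d) * t3);
            };
            return { blend(p0.x, p1.x, p2.x, p3.x), blend(p0.y, p1.y, p2.y, p3.y) };
        }

        // Sample a closed track: indices wrap, so the last segment joins back to the first point.
        std::vector<Vec2> sampleTrack(const std::vector<Vec2>& ctrl, int samplesPerSegment) {
            std::vector<Vec2> centerline;
            int n = static_cast<int>(ctrl.size());
            for (int i = 0; i < n; ++i) {
                Vec2 p0 = ctrl[(i + n - 1) % n], p1 = ctrl[i];
                Vec2 p2 = ctrl[(i + 1) % n],     p3 = ctrl[(i + 2) % n];
                for (int s = 0; s < samplesPerSegment; ++s)
                    centerline.push_back(catmullRom(p0, p1, p2, p3, s / (float)samplesPerSegment));
            }
            return centerline;
        }

    Offsetting each centerline sample along its normal by half the track width then gives the two edge polylines, so what is drawn and what vehicles collide with come from the same data.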

    Read the article
