Search Results

Search found 874 results on 35 pages for 'simulate'.


  • How do I work around sudo 'segmentation fault' on basic bash commands?

    - by sage
    I am sure the answers are out there, but alas there are too many answers (here and elsewhere) to other questions stopping me from finding them. I just encountered something substantially similar to what is described at the closed SO question, sudo : “segmentation fault” Ubuntu maverick [closed]. My team is using Ubuntu 11.04 on VMWare Workstation 8.0.4. We are doing development using C++, Xenomai, Qt, and Qt Creator. When we simulate our application on the virtual machine, we currently need to launch Qt Creator with sudo. My colleague mentioned today that he has been having issues where his workstation locks up and he needs to restart, and that occasionally all sudo bash commands return "segmentation fault". I just ran our application in simulation mode. I was running Qt Creator under sudo and Qt Creator received the abort signal (if I recall). Afterward, every command executed with sudo, from sudo qtcreator to sudo ls, resulted in the message Segmentation fault. I clicked on the power widget to see if I could log out, but the system shut down straightaway without prompting. My understanding is that we run sudo because of a permissions issue with Xenomai and the VM as currently configured, but my colleague has a workaround for this. I expect that not running Qt Creator under sudo -- something that has always made me nervous -- will help contain this issue, but I find it troubling that this could happen and manifest as it does. Does anyone know what is happening? Any recommendations on how to work around this issue? This is happening often enough that I am trying to lobby for VM changes so we can run the process without sudo.

    Read the article

  • Monitor network connectivity in WP7 apps

    - by Daniel Moth
    Most interesting Windows Phone apps rely on some network service for their functionality. So at some point in your app you may need to know programmatically if there is network connection available or not. For example, the Translator by Moth app relies on the Bing Translation service for translations. When a request for translation (text or voice) is made, the network call may fail. The failure reason is not evident from any of the return results, so I check the connection to see if it is present. Dependent on that, a different message is shown to the user. Before the translation phase is even reached, at the app start up time the Bing service is queried for its list of languages; in that case I don't want to show the user a message and instead want to be notified when the network is available in order to send the query out again. So for those two requirements (which I imagine are common in other apps) I wrote a simple wrapper MyNetwork static class over the framework APIs:

    - Call the MyNetwork.MonitorNetworkAvailability() method once so it monitors the network change
    - At any time, check the MyNetwork.IsConnected property to check for network presence
    - Subscribe to its MyNetwork.ConnectionEstablished event
    - Optionally, during debugging, use its MyNetwork.ChangeStatus method to simulate a change in network status

    As usual, there may be better ways to achieve this, but this class works perfectly for my scenarios. You can download the code here: MyNetworks.cs. Comments about this post welcome at the original blog.
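    For reference, a minimal sketch of what such a wrapper might look like (an illustrative reconstruction, not the actual MyNetworks.cs linked above; it assumes NetworkInterface.GetIsNetworkAvailable and the NetworkChange.NetworkAddressChanged event are available to the app, and the _debugOverride flag is an invented detail):

        using System;
        using System.Net.NetworkInformation;

        // Illustrative sketch only -- the real MyNetworks.cs may differ.
        public static class MyNetwork
        {
            private static bool _monitoring;
            private static bool? _debugOverride;   // set by ChangeStatus during debugging

            public static event EventHandler ConnectionEstablished;

            public static bool IsConnected
            {
                get { return _debugOverride ?? NetworkInterface.GetIsNetworkAvailable(); }
            }

            public static void MonitorNetworkAvailability()
            {
                if (_monitoring) return;
                _monitoring = true;
                NetworkChange.NetworkAddressChanged += delegate
                {
                    if (IsConnected && ConnectionEstablished != null)
                        ConnectionEstablished(null, EventArgs.Empty);
                };
            }

            // Debug-only helper to simulate a change in network status.
            public static void ChangeStatus(bool connected)
            {
                _debugOverride = connected;
                if (connected && ConnectionEstablished != null)
                    ConnectionEstablished(null, EventArgs.Empty);
            }
        }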

    Read the article

  • How to restore a corrupted installation?

    - by nightweels
    I have a laptop with a dead battery: when it is not plugged in, the battery goes critical very quickly. In Ubuntu's energy management, when the battery is critical there are two options, shutdown and hibernate, but hibernate is greyed out (unclickable), so I have no choice but to choose immediate shutdown; there is no standby even though it is an option in the screen behavior settings. An immediate shutdown (by immediate I mean the one we use when we are finished using the computer) happened while I was installing a program called quickly. After the power was restored, I tried to reinstall the program and I got this translated message: An untreatable error occurred: It seems there is a software error in aptdaemon, the program that lets you install and remove software and perform any other task related to package management. Details:

        Traceback (most recent call last):
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 968, in simulate
            trans.unauthenticated = self._simulate_helper(trans)
          File "/usr/lib/python2.7/dist-packages/aptdaemon/worker.py", line 1092, in _simulate_helper
            return depends, self._cache.required_download, \
          File "/usr/lib/python2.7/dist-packages/apt/cache.py", line 235, in required_download
            pm.get_archives(fetcher, self._list, self._records)
        SystemError: E: I wasn't able to locate a file for the libpng12-dev package. This might mean you need to manually fix this package.

    Read the article

  • Client side latency when using prediction

    - by Tips48
    I've implemented client-side prediction in my game: when input is received by the client, it first sends it to the server and then acts upon it just as the server will, to reduce the appearance of lag. The problem is that the server is authoritative, so when the server sends back the position of the entity to the client, it undoes the effect of the interpolation and creates a rubber-banding effect. For example: client sends input to server - client reacts on input - server receives and reacts on input - server sends back response - client reaction is undone due to latency between server and client. To solve this, I've decided to store the game state and input every tick on the client, and then when I receive a packet from the server, get the game state from when the packet was sent and simulate the game up to the current point. My questions: (1) Won't this cause lag? If I'm receiving 20-30 EntityPositionPackets a second, that means I have to run 20-30 simulations of the game state. (2) How do I sync the client and server tick? Currently, I'm sending the millisecond the packet was sent by the server, but I think it's adding too much complexity instead of just sending the tick. The problem with converting it to sending the tick is that I have no guarantee that the client and server are ticking at the same rate, for example if the client is an older, low-end PC.
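    A rough sketch of the store-and-replay idea described above (type names and the Simulate step are illustrative assumptions, not the poster's actual code): keep the unacknowledged inputs keyed by tick; when an authoritative state for some past tick arrives, drop the inputs up to that tick, reset to the server state, and re-apply the remaining inputs. Re-running 20-30 of these small replays per second is typically cheap because each replay only touches the locally controlled entity, not the whole world.

        using System.Collections.Generic;

        // Illustrative types -- the real game's input/state classes will differ.
        struct Input { public int Tick; public float MoveX, MoveY; }
        struct EntityState { public int Tick; public float X, Y; }

        class PredictionBuffer
        {
            private readonly List<Input> pendingInputs = new List<Input>();
            public EntityState Current;

            // Called every client tick: predict locally and remember the input.
            public void ApplyLocal(Input input)
            {
                Current = Simulate(Current, input);
                pendingInputs.Add(input);
            }

            // Called when the server's authoritative state for a past tick arrives.
            public void OnServerState(EntityState serverState)
            {
                // Drop inputs the server has already processed.
                pendingInputs.RemoveAll(i => i.Tick <= serverState.Tick);

                // Rewind to the authoritative state and replay the rest.
                Current = serverState;
                foreach (var input in pendingInputs)
                    Current = Simulate(Current, input);
            }

            // Must apply the same rules as the server's per-tick update.
            private static EntityState Simulate(EntityState state, Input input)
            {
                state.Tick = input.Tick;
                state.X += input.MoveX;
                state.Y += input.MoveY;
                return state;
            }
        }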

    Read the article

  • JavaScript Class Patterns Revisited: Endgame

    - by Liam McLennan
    I recently described some of the patterns used to simulate classes (types) in JavaScript. But I missed the best pattern of them all. I described a pattern I called constructor function with a prototype that looks like this:

        function Person(name, age) {
          this.name = name;
          this.age = age;
        }

        Person.prototype = {
          toString: function() {
            return this.name + " is " + this.age + " years old.";
          }
        };

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    and I mentioned that the problem with this pattern is that it does not provide any encapsulation, that is, it does not allow private variables. Jan Van Ryswyck recently posted the solution, obvious in hindsight, of wrapping the constructor function in another function, thereby allowing private variables through closure. The above example becomes:

        var Person = (function() {
          // private variables go here
          var name, age;

          function constructor(n, a) {
            name = n;
            age = a;
          }

          constructor.prototype = {
            toString: function() {
              return name + " is " + age + " years old.";
            }
          };

          return constructor;
        })();

        var john = new Person("John Galt", 50);
        console.log(john.toString());

    Now we have prototypal inheritance and encapsulation. The important thing to understand is that the constructor and the toString function both have access to the name and age private variables because they are in an outer scope and they become part of the closure.

    Read the article

  • [Kubuntu 14.04][Eclipse] ADT crashes when clicking OK in Project properties

    - by nouseforname
    Since I upgraded to Kubuntu 14.04, my Eclipse crashes in different situations. Mostly I can "simulate" it by going to Project properties and pressing OK; then it always crashes.

    My system:

        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=14.04
        DISTRIB_CODENAME=trusty
        DISTRIB_DESCRIPTION="Ubuntu 14.04.1 LTS"

    My Java:

        java version "1.8.0_05"
        Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
        Java HotSpot(TM) 64-Bit Server VM (build 25.5-b02, mixed mode)

    My ADT version: Android Development Toolkit Version: 23.0.0.1245622

    I already tried to add this in adt-bundle-linux-x86_64/eclipse/configuration/configuration.ini:

        org.eclipse.swt.browser.DefaultType=mozilla
        -Dorg.eclipse.swt.browser.DefaultType=mozilla

    Error:

        #
        # A fatal error has been detected by the Java Runtime Environment:
        #
        #  SIGSEGV (0xb) at pc=0x00007fe049eb1718, pid=5964, tid=140601811232512
        #
        # JRE version: Java(TM) SE Runtime Environment (8.0_05-b13) (build 1.8.0_05-b13)
        # Java VM: Java HotSpot(TM) 64-Bit Server VM (25.5-b02 mixed mode linux-amd64 compressed oops)
        # Problematic frame:
        # C  [libgobject-2.0.so.0+0x19718]  g_object_get_qdata+0x18
        #
        # Core dump written. Default location: /home/maddin/core or core.5964
        #
        # An error report file with more information is saved as:
        # /home/maddin/hs_err_pid5964.log
        Compiled method (nm) 28866 4166 n 0 org.eclipse.swt.internal.gtk.OS::_g_object_get_qdata (native)
          total in heap  [0x00007fe051da6790,0x00007fe051da6af0] = 864
          relocation     [0x00007fe051da68b0,0x00007fe051da68f8] = 72
          main code      [0x00007fe051da6900,0x00007fe051da6ae8] = 488
          oops           [0x00007fe051da6ae8,0x00007fe051da6af0] = 8
        #
        # If you would like to submit a bug report, please visit:
        #   http://bugreport.sun.com/bugreport/crash.jsp
        # The crash happened outside the Java Virtual Machine in native code.
        # See problematic frame for where to report the bug.
        #

    Now, as soon as I change System Settings - Application Appearance - GTK - GTK design to something other than "oxygen-gtk", this crash doesn't happen anymore, but the application appearance is ugly. Besides that, I get a lot of errors/warnings like this:

        (SWT:6148): GLib-GObject-CRITICAL **: g_closure_add_invalidate_notifier: assertion 'closure->n_inotifiers < CLOSURE_MAX_N_INOTIFIERS' failed

    or other GTK warnings from the particular design not having a theme engine, which actually doesn't seem to cause any crash so far. So I have 3 options:

    - accept crashes
    - accept warnings (maybe the best choice)
    - accept ugly design

    What can I do to solve this issue without changing the design settings?

    Read the article

  • Netcat I/O enhancements

    - by user13277689
    When Netcat was integrated into OpenSolaris it was already clear that a couple of enhancements would be needed. The biggest set of changes made after Solaris 11 Express was released brings various I/O enhancements to the netcat shipped with Solaris 11. Also, since Solaris 11, the netcat package is installed by default in all distribution forms (live CD, text install, ...). Now, let's take a look at the new functionality:

        /usr/bin/netcat   alternative program name (symlink)
        -b bufsize        I/O buffer size
        -E                use exclusive bind for the listening socket
        -e program        program to execute
        -F                no network close upon EOF on stdin
        -i timeout        extension of timeout specification
        -L timeout        linger on close timeout
        -l -p port addr   previously not allowed usage
        -m byte_count     quit after receiving byte_count bytes
        -N file           pattern for UDP scanning
        -I bufsize        size of input socket buffer
        -O bufsize        size of output socket buffer
        -R redir_spec     port redirection (addr/port[/{tcp,udp}] syntax of redir_spec)
        -Z                bypass zone boundaries
        -q timeout        timeout after EOF on stdin

    Obviously, the Swiss army knife of networking tools just got a bit thicker. While by themselves the options are pretty self-explanatory, their combination together with other options, the context of use, or boundary values of option arguments make it possible to construct small but powerful tools. For example:

    - the port redirector allows converting a TCP stream to UDP datagrams
    - the buffer size specification makes it possible to send one-byte TCP segments or to produce IP fragments easily
    - the socket linger option can be used to produce TCP RST segments by setting the timeout to 0
    - the execute option makes it possible to simulate TCP/UDP servers or clients with a shell/Python/Perl/whatever script
    - etc.

    If you find some other helpful uses, please share via comments. The manual page nc(1) contains more details, along with examples of how to use some of these new options.

    Read the article

  • Why to avoid SELECT * from tables in your Views

    - by Jeff Smith
        -- clean up any messes left over from before:
        if OBJECT_ID('AllTeams') is not null
          drop view AllTeams
        go
        if OBJECT_ID('Teams') is not null
          drop table Teams
        go

        -- sample table:
        create table Teams (
          id int primary key,
          City varchar(20),
          TeamName varchar(20)
        )
        go

        -- sample data:
        insert into Teams (id, City, TeamName)
        select 1,'Boston','Red Sox' union all
        select 2,'New York','Yankees'
        go

        create view AllTeams as
          select * from Teams
        go

        select * from AllTeams
        --Results:
        --
        --id          City                 TeamName
        ------------- -------------------- --------------------
        --1           Boston               Red Sox
        --2           New York             Yankees

        -- Now, add a new column to the Teams table:
        alter table Teams add League varchar(10)
        go
        -- put some data in there:
        update Teams set League='AL'

        -- run it again
        select * from AllTeams
        --Results:
        --
        --id          City                 TeamName
        ------------- -------------------- --------------------
        --1           Boston               Red Sox
        --2           New York             Yankees
        --
        -- Notice that League is not displayed!

        -- Here's an even worse scenario, when the table gets altered in ways beyond adding columns:
        drop table Teams
        go
        -- recreate table putting the League column before the City:
        -- (i.e., simulate re-ordering and/or inserting a column)
        create table Teams (
          id int primary key,
          League varchar(10),
          City varchar(20),
          TeamName varchar(20)
        )
        go
        -- put in some data:
        insert into Teams (id,League,City,TeamName)
        select 1,'AL','Boston','Red Sox' union all
        select 2,'AL','New York','Yankees'

        -- Now, Select again for our view:
        select * from AllTeams
        --Results:
        --
        --id          City       TeamName
        ------------- ---------- --------------------
        --1           AL         Boston
        --2           AL         New York
        --
        -- The column labeled "City" in the View is actually the League,
        -- and the column labelled TeamName is actually the City!
        go

        -- clean up:
        drop view AllTeams
        drop table Teams

    Read the article

  • Doubling the DPI with a shader?

    - by Mathias Lykkegaard Lorenzen
    I'm developing a game where the map is generated with Perlin noise, but on the CPU. I am generating some Perlin noise onto a small texture, and then I stretch it out over the whole screen to simulate a map. The reason for generating the noise on the CPU is that I want it to look the same on all devices. Now, here's the end result. Please ignore the bullets and the explosion in the picture. What matters is the background (the black/gray pixels) and the ground (the brown-ish pixels). They are rendered to the same texture through Perlin noise. However, this doesn't look very pretty. So I was wondering if it would be possible to double the amount of pixels using a shader, and round off the edges at the same time? In other words, improve the apparent DPI. I'm using SharpDX with DirectX 11, through its Toolkit feature. But any help that leads me in the right direction (for instance through HLSL) would be greatly appreciated. Thanks in advance.

    Read the article

  • ArchBeat Link-o-Rama for 11/29/2011

    - by Bob Rhubart
    - Webcast: Introducing Oracle WebLogic Server 12c: Developer Deep Dive - December 1, 2011, 11am - 12pm PT / 2pm - 3pm ET. Learn how Oracle WebLogic Server 12c enables rapid development of modern, lightweight Java EE 6 applications. Discover how you can leverage the latest development technologies, tools and standards when deploying to Oracle WebLogic Server across both conventional and Cloud environments.
    - Web Services in BI Publisher 11g | Robin Moffatt - BI Publisher 11g comes with a shiny set of new Web Services, superseding those that were in 10g. Robin Moffatt's article discusses some of the uses, and ways to implement them.
    - Stanford expands free, online information technology course offerings | ZDNet - Joe McKendrick reports on new Stanford online courses set to start in January 2012. Courses include Software as a Service and Computer Science 101.
    - The federal government's secret 1966 cloud computing plan | ZDNet - "Even as far back as 45 years ago, the US federal government struggled to consolidate and become more service-oriented across its agency silos," says McKendrick.
    - SOA Made Simple; Architects in AZ; Introduction to Cloud Migration - This week on the Oracle Technology Network Architect Home Page.
    - New release of S-ASH v.2.3 | Marcin Przepiorowski - A short post from Marcin Przepiorowski on the new version of Oracle Simulate ASH.
    - Architecture all day. Oracle Technology Network Architect Day - Phoenix, AZ - Spend the day with your peers learning from Oracle experts on Cloud Computing, Engineered Systems, and more. Wednesday, December 14, 2011, 8:30am to 5:00pm. Registration is free, but seating is limited.

    Read the article

  • Happy Day! VS2010 SP1, Project Server Integration, Load Test Feature Pack

    - by Aaron Kowall
    Microsoft released a PILE of Visual Studio goodness today:

    - Visual Studio 2010 SP1 (including TFS SP1): Finally done with remembering which GDR packs, KB patches, etc. need to be installed with a new VS/TFS 2010 deployment. Just grab the SP1. It's available today for MSDN Subscribers and March 10th for public download.
    - TFS-Project Server Integration Feature Pack: MSDN Subscribers got another little treat today with the TFS-Project Server integration feature pack. We can now get project rollups and portfolio level management with Project Server yet still have the tight developer interaction with TFS. Finally we can make the PMO happy without duplicate entry or MS Project gymnastics.
    - Visual Studio Load Test Feature Pack: This is a new benefit for Visual Studio 2010 Ultimate subscribers. Previously there was a limit to Ultimate load testing of 250 virtual users. If you needed more, you had to buy virtual user license packs. No more. Now your Visual Studio Ultimate license allows you to simulate as many virtual users as you need!! This is HUGE in improving adoption of regular load testing for development projects.

    All the details are available from Soma's blog. Technorati Tags: VS2010,TFS,Load Test

    Read the article

  • An alternative to multiple inheritance when creating an abstraction layer?

    - by sebf
    In my project I am creating an abstraction layer for some APIs. The purpose of the layer is to make multi-platform support easier, and also to simplify the APIs to the feature set that I need while also providing some functionality, the implementation of which will be unique to each platform. At the moment, I have implemented it by defining an abstract class, which has methods that create objects that implement interfaces. The abstract class and these interfaces define the capabilities of my abstraction layer. The implementation of these in my layer should of course be arbitrary from the POV of my application, but I have done it, for my first API, by creating chains of subclasses which add more specific functionality as the features of the APIs they expose become less generic. An example would probably demonstrate this better:

        //The interface as seen by the application
        interface IGenericResource
        {
            byte[] GetSomeData();
        }

        interface ISpecificResourceOne : IGenericResource
        {
            int SomePropertyOfResourceOne { get; }
        }

        interface ISpecificResourceTwo : IGenericResource
        {
            string SomePropertyOfResourceTwo { get; }
        }

        public abstract class MyLayer
        {
            public abstract ISpecificResourceOne CreateResourceOne();
            public abstract ISpecificResourceTwo CreateResourceTwo();
            public abstract void UseResourceOne(ISpecificResourceOne one);
            public abstract void UseResourceTwo(ISpecificResourceTwo two);
        }

        //The layer as created in my library
        public class LowLevelResource : IGenericResource
        {
            public byte[] GetSomeData() { return null; }
        }

        public class ResourceOne : LowLevelResource, ISpecificResourceOne
        {
            public int SomePropertyOfResourceOne { get { return 0; } }
        }

        public class ResourceTwo : ResourceOne, ISpecificResourceTwo
        {
            public string SomePropertyOfResourceTwo { get { return ""; } }
        }

        public partial class Implementation : MyLayer
        {
            public override void UseResourceOne(ISpecificResourceOne one)
            {
                DoStuff((ResourceOne)one);
            }
        }

    As can be seen, I am essentially trying to have two inheritance chains on the same object, but of course I can't do this, so I simulate the second chain with interfaces. The thing is, though, I don't like using interfaces for this; it seems wrong. In my mind an interface defines a contract: any class that implements that interface should be able to be used where that interface is used, but here that is clearly not the case, because the interfaces are being used to allow an object from the layer to masquerade as something else without the application needing to have access to its definition. What technique would allow me to define a comprehensive, intuitive collection of objects for an abstraction layer, while their implementation remains independent? (Language is C#)

    Read the article

  • Simplest way to render image over top of another with another image used as mask in OpenGL?

    - by Adam Naylor
    The effect I'm looking for is to have a single large background image that is always visible (at full alpha) and then show a second image (what I call a light map or specular map) that is partially shown over the top based on a third image (which is effectively a mask). The effect is similar to this effect, except that instead of simply darkening or lightening the background image using the third image, it needs to mask the second without affecting the first at all. The third image is the only one that moves, therefore hard-baking the third image's alpha into the second image isn't an option. If my explanation isn't clear I'll provide visual examples when I have more time. I'd prefer not to go down a shader route as I haven't taught myself this area yet, so unless I have to I'd rather try to achieve this with simple alpha blending. Happy to use a shader approach. Cheers. Additional: These third images are obviously light sources being cast onto the first image, showing the specular information from the second image to simulate the light 'shining' off the objects in the first image. The solution I implement will need to allow two light sources to potentially overlap, so my current thought is that the alpha values of the two images will need to be combined (added?) to produce a final image which masks the second image? Don't worry about things like coloured lights. For this technique the lights are all considered white.

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation on a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are: I will implement this 3 times: sequential, using OpenMP, and using MPI, so if I can use the same structure that will be quite good. The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It would be overkill for both the sequential and parallel versions to check each cell and look for possible agents in it. I could use a quadtree and store agents in it, but then how can I get the information about neighbor cells? Please let me know if I should elaborate more.
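    For what it's worth, a small illustrative sketch (class names and fields are assumptions, not from the post) of one common compromise: keep the 2D grid for O(1) neighbor lookups, but also keep the agents in a flat list so each time step iterates over the handful of agents instead of every cell of a 1000x1000 world.

        using System.Collections.Generic;

        // Illustrative sketch: the grid gives cheap neighbor queries, the list keeps the
        // per-tick loop proportional to the number of agents rather than the number of cells.
        class Agent
        {
            public int X, Y;
            // ... decision state / extra parameters would live here
        }

        class World
        {
            private readonly Agent[,] grid;                      // null where a cell is empty
            private readonly List<Agent> agents = new List<Agent>();

            public World(int width, int height) { grid = new Agent[width, height]; }

            public void Add(Agent agent)
            {
                agents.Add(agent);
                grid[agent.X, agent.Y] = agent;
            }

            public Agent AgentAt(int x, int y)                   // neighbor lookup for decisions
            {
                if (x < 0 || y < 0 || x >= grid.GetLength(0) || y >= grid.GetLength(1)) return null;
                return grid[x, y];
            }

            public void Step()
            {
                // Visit only the (few) agents, not every cell of a possibly huge world.
                foreach (var agent in agents.ToArray())          // copy, in case agents replicate mid-step
                {
                    // Example decision: step right if that neighbor cell exists and is empty.
                    int nx = agent.X + 1;
                    if (nx < grid.GetLength(0) && AgentAt(nx, agent.Y) == null)
                    {
                        grid[agent.X, agent.Y] = null;
                        agent.X = nx;
                        grid[agent.X, agent.Y] = agent;
                    }
                }
            }
        }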

    Read the article

  • How to do a multishot in xna?

    - by DeVonte
    I am trying to simulate a gun which shoots multiple bullets at the same time (similar to a spread shot). I am thinking I have to create another bullet array and then do the same as I have below, but in a different direction. Here is what I have so far:

        foreach (GameObject bullet in bullets)
        {
            // Find a bullet that isn't alive
            if (!bullet.alive)
            {
                // And set it to alive
                bullet.alive = true;

                if (flip == SpriteEffects.FlipHorizontally) // Facing right
                {
                    float armCos = (float)Math.Cos(arm.rotation - MathHelper.PiOver2);
                    float armSin = (float)Math.Sin(arm.rotation - MathHelper.PiOver2);

                    // Set the initial position of our bullets at the end of our gun arm.
                    // 42 is obtained by taking the width of the Arm_Gun texture / 2
                    // and subtracting the width of the Bullet texture / 2. ((96/2)-(12/2))
                    bullet.position = new Vector2(
                        arm.position.X + 42 * armCos,
                        arm.position.Y + 42 * armSin);

                    // And give it a velocity in the direction we're aiming.
                    // Increase/decrease speed by changing 15.0f
                    bullet.Velocity = new Vector2(
                        (float)Math.Cos(arm.rotation - MathHelper.PiOver4 + MathHelper.Pi + MathHelper.PiOver2),
                        (float)Math.Sin(arm.rotation - MathHelper.PiOver4 + MathHelper.Pi + MathHelper.PiOver2)) * 15.0f;
                }
                else // Facing left
                {
                    float armCos = (float)Math.Cos(arm.rotation + MathHelper.PiOver2);
                    float armSin = (float)Math.Sin(arm.rotation + MathHelper.PiOver2);

                    // Set the initial position of our bullet at the end of our gun arm.
                    // 42 is obtained by taking the width of the Arm_Gun texture / 2
                    // and subtracting the width of the Bullet texture / 2. ((96/2)-(12/2))
                    bullet.position = new Vector2(
                        arm.position.X - 42 * armCos,
                        arm.position.Y - 42 * armSin);

                    // And give it a velocity in the direction we're aiming.
                    // Increase/decrease speed by changing 15.0f
                    bullet.Velocity = new Vector2(-armCos, -armSin) * 15.0f;
                }

                return;
            } // End if
        } // End foreach
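    One way to get the spread effect described above, sketched as an XNA-style illustration (the FireSpread helper, its parameters, and the even fan spacing are assumptions, not part of the original code; it reuses the poster's GameObject fields alive, position and Velocity): instead of returning after the first dead bullet, keep pulling dead bullets from the pool and give each one a slightly different angle around the aim direction.

        // Sketch only: fire 'count' bullets fanned evenly around aimAngle (radians).
        void FireSpread(GameObject[] bullets, Vector2 muzzle, float aimAngle,
                        int count, float spreadRadians, float speed)
        {
            int fired = 0;
            foreach (GameObject bullet in bullets)
            {
                if (fired == count) break;
                if (bullet.alive) continue;          // only reuse dead bullets from the pool

                // Offsets run from -spread/2 to +spread/2 across the shots.
                float t = (count == 1) ? 0.5f : fired / (float)(count - 1);
                float angle = aimAngle + (t - 0.5f) * spreadRadians;

                bullet.alive = true;
                bullet.position = muzzle;
                bullet.Velocity = new Vector2((float)Math.Cos(angle),
                                              (float)Math.Sin(angle)) * speed;
                fired++;
            }
        }

    Calling it with, say, count = 5 and spreadRadians = MathHelper.ToRadians(20) would fan five bullets over a 20-degree arc; the muzzle position and aim angle can be computed exactly as in the code above.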

    Read the article

  • Are there design patterns or generalised approaches for particle simulations?

    - by romeovs
    I'm working on a project (for college) in C++. The goal is to write a program that can more or less simulate a beam of particles flying through the LHC synchrotron. Not wanting to rush into things, my team and I are thinking about how to implement this, and I was wondering if there are general design patterns that are used to solve this kind of problem. The general approach we came up with so far is the following:

    - there is a World that holds all objects
    - you can add objects to this world, such as Particle, Dipole and Quadrupole
    - time is cut up into discrete steps, and at each point in time, for each Particle, the magnetic and electric forces that each object in the World generates are calculated and summed up (luckily electro-magnetism is linear)
    - each Particle moves accordingly (using a simple estimation approach to solve the differential equations of motion)
    - save the Particle positions
    - repeat

    This seems a good approach but, for instance, it is hard to take into account symmetries that might be present (such as the magnetic field of each Quadrupole), and it is thus suboptimal. To take into account symmetries such as that of the Quadrupole field, it would be much easier to (also) make space discrete and somehow store the form of the Quadrupole field somewhere. (Since 2532 or so Quadrupoles are stored, this should lead to a massive gain in performance, not having to recalculate each Quadrupole field.) So, are there any design patterns? Is the World approach feasible or is it old-fashioned, bad programming? What about symmetry, how is that generally taken into account?
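    To make the World-plus-discrete-time idea above concrete, here is a bare-bones sketch (written in C# purely for illustration, since the project itself is in C++, and all names are made up): field sources share one interface, and each time step sums their forces on every particle and then applies a simple explicit Euler update.

        using System.Collections.Generic;
        using System.Numerics;

        // Any field source (dipole, quadrupole, ...) just reports the force it exerts on a particle.
        interface IFieldSource
        {
            Vector3 ForceOn(Particle p);
        }

        class Particle
        {
            public Vector3 Position, Velocity;
            public float Charge, Mass;
        }

        class World
        {
            public readonly List<Particle> Particles = new List<Particle>();
            public readonly List<IFieldSource> Sources = new List<IFieldSource>();

            // One discrete time step: sum the (linear) forces, then a simple explicit Euler estimate.
            public void Step(float dt)
            {
                foreach (var p in Particles)
                {
                    Vector3 totalForce = Vector3.Zero;
                    foreach (var source in Sources)
                        totalForce += source.ForceOn(p);

                    p.Velocity += totalForce * (dt / p.Mass);
                    p.Position += p.Velocity * dt;
                }
            }
        }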

    Read the article

  • Software architecture for two similar classes which require different input parameters for the same method

    - by I Like to Code
    I am writing code to simulate a supply chain. The supply chain can be simulated in either an intermediate-stocking or a cross-docking configuration, so I wrote two simulator objects, IstockSimulator and XdockSimulator. Since the two objects share certain behaviors (e.g. making shipments, demand arriving), I wrote an abstract simulator object AbstractSimulator which is a parent class of the two simulator objects. The abstract simulator object has a method runSimulation() which takes an input parameter of class SimulationParameters. Up till now, the simulation parameters only contain fields that are common to both simulator objects, such as randomSeed, simulationStartPeriod and simulationEndPeriod. However, I now want to include fields that are specific to the type of simulation that is being run, i.e. an IstockSimulationParameters class for an intermediate-stocking simulation, and an XdockSimulationParameters class for a cross-docking simulation. My current idea is to take the method runSimulation() out of the AbstractSimulator class, and instead put a runSimulation(IstockSimulationParameters) method in the IstockSimulator class and a runSimulation(XdockSimulationParameters) method in the XdockSimulator class. I am worried, however, that this approach will lead to code duplication. What should I do?
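    For illustration only (a sketch of one possible shape, not necessarily the answer; every name below beyond those mentioned in the question is invented): a generic type parameter on the abstract simulator lets runSimulation() stay in the parent, with the shared driver implemented once, while each concrete simulator declares which parameter class it accepts.

        // Illustrative sketch of the classes described above, with a type parameter added.
        abstract class SimulationParameters
        {
            public long RandomSeed;
            public int SimulationStartPeriod, SimulationEndPeriod;
        }

        class IstockSimulationParameters : SimulationParameters
        {
            public int ReorderPoint;                    // hypothetical intermediate-stocking field
        }

        class XdockSimulationParameters : SimulationParameters
        {
            public int DockDoorCount;                   // hypothetical cross-docking field
        }

        abstract class AbstractSimulator<TParams> where TParams : SimulationParameters
        {
            // Shared driver: lives in one place, no duplication.
            public void RunSimulation(TParams parameters)
            {
                for (int t = parameters.SimulationStartPeriod; t <= parameters.SimulationEndPeriod; t++)
                {
                    MakeShipments(t, parameters);       // shared behaviors call into
                    DemandArrives(t, parameters);       // configuration-specific overrides
                }
            }

            protected abstract void MakeShipments(int period, TParams parameters);
            protected abstract void DemandArrives(int period, TParams parameters);
        }

        class IstockSimulator : AbstractSimulator<IstockSimulationParameters>
        {
            protected override void MakeShipments(int period, IstockSimulationParameters p) { /* ... */ }
            protected override void DemandArrives(int period, IstockSimulationParameters p) { /* ... */ }
        }

        class XdockSimulator : AbstractSimulator<XdockSimulationParameters>
        {
            protected override void MakeShipments(int period, XdockSimulationParameters p) { /* ... */ }
            protected override void DemandArrives(int period, XdockSimulationParameters p) { /* ... */ }
        }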

    Read the article

  • What relational database system should I learn? [closed]

    - by acidzombie24
    At the moment I know SQLite (my favorite) and MySQL (it's OK, but it annoys me), and I do not want to learn MS SQL / T-SQL (it only allows one nullable row if the column is unique). I am thinking about learning a new database system. My requirements for it are:

    - Must allow multiple connections at once (read and write)
    - All of the data I choose must be ACID compliant
    - Performance should be good. I have a 17 GB table in one project. It should perform well on reads and transactional writes. With MySQL it took hours to restore it, and there were no foreign keys on that specific table. It only finished within a workday because I found a suggestion to adjust a setting, which I think was key-buffer. And it still took hours.
    - Unique columns that allow more than one row to be null. I shouldn't have to say it, but dammit MS.
    - Allows one to make ongoing backups. Something like 'binary logs': some relatively small amounts of data I can grab and apply to my local DB to keep it in sync with the one on the server.
    - Table joins. I'd rather not write a bunch of queries to simulate a join.

    What I would like, but is not required:

    - Foreign keys. This may be a requirement later.
    - Open source
    - Fair tool support, so I can measure queries, easily backup/restore, etc.
    - .NET and C (or C++) interface. (I've seen one that uses raw TCP with JSON, which was OK-ish.)
    - Good subquery support. Once I was working with an older version of MySQL (I believe < 5.1, but it could have been 5.1) and I had to write many queries to do one query because it couldn't do subqueries. Or maybe it couldn't do them efficiently and died because of memory limitations with a huge dataset.

    What DB system should I learn?

    Read the article

  • Using pkexec policy to run out of /opt/

    - by liberavia
    I am still trying to make it possible to run my app with root privileges. Therefore I created two policies to run the application via pkexec (one for /usr/bin and one for /opt/extras...) and added them to the setup.py:

        data_files=[('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.pkexec.armorforge.policy']),
                    ('/usr/share/polkit-1/actions',
                     ['data/com.ubuntu.extras.pkexec.armorforge.policy']),
                    ('/usr/bin/', ['data/armorforge-pkexec'])]
        )

    Additionally I added a start script which uses pkexec for starting the application. It distinguishes between the two places and is used in the Exec statement of the desktop file:

        #!/bin/sh
        if [ -f /opt/extras.ubuntu.com/armorforge/bin/armorforge ]; then
          pkexec "/opt/extras.ubuntu.com/armorforge/bin/armorforge" "$@"
        else
          pkexec `which armorforge` "$@"
        fi

    If I simply do a quickly package, everything works right. But if I package with the extras option, quickly package --extras, the Exec statement will be exchanged. Even if I try to simulate the pkexec call via armorforge-pkexec, it will ask for a password and then return this:

        andre@andre-desktop:~/Entwicklung/Ubuntu/armorforge$ armorforge-pkexec

        (armorforge:10108): GLib-GIO-ERROR **: Settings schema 'org.gnome.desktop.interface' is not installed
        Trace/breakpoint trap (core dumped)

    So OK, I could not trick the opt-thing. How can I make sure that my application will run with root privileges out of /opt? I copied the way of using pkexec from Synaptic. My application is for communicating with AppArmor, which currently has no D-Bus interface, so I need to write into the /etc/apparmor.d folder. How should I deal with the opt build which, as far as I understand, is required to submit my application to the Ubuntu Software Center? Thanks for any hints and/or links :-)

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I have this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to use TDD to continue the development, and have found a nice C unit-testing framework for this. However I'm totally stuck on how to apply it. Take this case for example: a user types a letter 'l' that is captured by ncurses getch(), and then an sqlite3 query is run that, for every row, calls a callback function. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is the expected one. However this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI, so that the callback function would populate a list of entries and that list would later be printed. In that case I would be able to check whether that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? With my original design I could just do another query, sorted differently. Previously I've only done TDD with command-line applications, and there it's really easy to just compare the output with what I expect. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything. (The way git.git does with its test framework.) So the question is: how do I add testing to a tightly integrated database/UI?

    Read the article

  • How Do Computers Work? [closed]

    - by Rob P.
    This is almost embarrassing to ask... I have a degree in Computer Science (and a second one in progress). I've worked as a full-time .NET developer for nearly five years. I generally seem competent at what I do. But I Don't Know How Computers Work! Please, bear with me for a second. A quick Google of 'How a Computer Works' will yield lots and lots of results, but I struggled to find one that really answered what I'm looking for. I realize this is a huge, huge question, so really, if you can just give me some keywords or some direction. I know there are components... the power supply, the motherboard, RAM, CPU, etc... and I get the 'general idea' of what they do. But I really don't understand how you go from a line of code like Console.ReadLine() in .NET (or Java or C++) and have it actually do stuff. Sure, I'm vaguely aware of MSIL (in the case of .NET), and that some magic happens with the JIT compiler and it turns into native code (I think). I'm told Java is similar, and C++ cuts out the middle step. I've done some mainframe assembly; it was a few years back now. I remember there were some instructions and some CPU registers, and I wrote code... and then some magic happened... and my program would work (or crash). From what I understand, an 'emulator' would simulate what happens when you call an instruction and it would update the CPU registers; but what makes those instructions work the way they do? Does this turn into an electronics question rather than a 'computer' question? I'm guessing there isn't any practical reason for me to understand this, but I feel like I should be able to. (Yes, this is what happens when you spend a day with a small child. It takes them about 10 minutes and five iterations of asking 'Why?' for you to realize how much you don't know.)

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail while not tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying. Code snippet (the bad code is on the fifth line...):

        // if the change permissions is set, change the file permissions
        if($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false ||
               $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that the array('success' => false, 'type' => ...) is returned.

    Read the article

  • CUDA 4.1 Particle Update

    - by N0xus
    I'm using CUDA 4.1 to handle the update of my particle system, which I've made with DirectX 10. So far, my update method for the particle system is one line of code within a for loop that makes each particle fall down the y axis to simulate a waterfall:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    In my .cu file I've created a struct which I copied from my particle class and is as follows:

        struct ParticleType
        {
            float positionX, positionY, positionZ;
            float red, green, blue;
            float velocity;
            bool active;
        };

    Then I have an UpdateParticle method in the .cu as well. This encompasses the 3 main parameters my particles need to update themselves based on the initial line of code:

        __global__ void UpdateParticle(float* position, float* velocity, float frameTime)
        {
        }

    This is my first CUDA program and I'm at a loss as to what to do next. I've tried to simply put the particleList line in the UpdateParticle method, but then the particles don't fall down as they should. I believe it is because I am not calling something that I need to in the class where the particle-fall code used to be. Could someone please tell me what I am missing to get it working as it should? If I am doing this completely wrong in general, then please inform me as well.

    Read the article

  • Mouse Speed in GLUT and OpenGL?

    - by CroCo
    I would like to simulate a point that moves in 2D. The input should be the speed of the mouse, so the new position will be computed as follows:

        new_position = old_position + delta_time * mouse_velocity

    As far as I know, in GLUT there is no function to acquire the current speed of the mouse between each frame. What I've done so far to compute delta_time is the following:

        void Display()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glColor3f(1.0f, 0.0f, 0.0f);

            static int delta_t, current_t, previous_t;
            current_t = glutGet(GLUT_ELAPSED_TIME);
            delta_t = current_t - previous_t;
            std::cout << delta_t << std::endl;
            previous_t = current_t;

            glutSwapBuffers();
        }

    Where should I start from here? (Note: I have to get the speed of the mouse because I'm modeling a system.) Edit: Based on the above code, delta_time fluctuates a lot:

        34 19 2 20 1 20 0 16 1 1 10 21 0 13 1 19 34 0 13 0 6 1 14

    Why does this happen?

    Read the article

  • Dirt compression from vehicle tires

    - by Mungoid
    So I kind of have this working, but it's not correct because it just averages, so I wanted to know if anyone here has any ideas. I'm trying to simulate loose dirt compressing under the tires of a vehicle to reduce the potential bumpiness of 'chunky' terrain. Currently I do this as follows: I have a bounding box around each tire, set a little lower so it intersects the terrain. Each frame, I (currently) average the heights of all the terrain points that are within the box bounds of that tire, and then set them all to that average. Clearly this won't work in most cases because, for example, if I'm on a hill, the terrain will deform way too much. One way I thought of was to have a maximum and minimum amount the points could raise and lower, but that still doesn't seem to work properly and sometimes looks more like steps than smooth dirt. I want to say that there is probably a bit more to this than what I'm currently doing, but I am not sure where to look. Could anyone here shed some light on this subject? Would I benefit from looking up some smoothing algorithm or something similar?
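    For reference, a small sketch of the averaging-with-a-clamp variant mentioned above (the heightmap layout, the per-frame maxDelta and the function name are illustrative assumptions): instead of snapping every point under the tire to the footprint average, each point is pulled toward the average by at most a small amount per frame, which keeps hills from being flattened in a single step.

        // Illustrative sketch: compress the terrain under one tire's footprint.
        void CompressUnderTire(float[,] heights, int minX, int minZ, int maxX, int maxZ, float maxDelta)
        {
            float sum = 0f;
            int count = 0;
            for (int x = minX; x <= maxX; x++)
                for (int z = minZ; z <= maxZ; z++)
                {
                    sum += heights[x, z];
                    count++;
                }
            if (count == 0) return;
            float average = sum / count;

            for (int x = minX; x <= maxX; x++)
                for (int z = minZ; z <= maxZ; z++)
                {
                    float diff = average - heights[x, z];
                    // Clamp the per-frame change so the deformation stays gradual.
                    if (diff > maxDelta) diff = maxDelta;
                    if (diff < -maxDelta) diff = -maxDelta;
                    heights[x, z] += diff;
                }
        }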

    Read the article
