Search Results

Search found 91 results on 4 pages for 'the quantum physicist'.

Page 2/4 | < Previous Page | 1 2 3 4  | Next Page >

  • Team seeks collaboration for 2D action adventure RPG

    - by AlchemicTempest
    Not entirely sure if it's appropriate to post this here, but I'll try: we are looking for all kinds of people interested in game development for our 2D sci-fi action-adventure RPG "Quantum Nucleus". This is a voluntary collaboration. We are seeking programmers (Java), artists, designers, audio people and writers; basically all kinds of people. Please watch our video for further information: Video Link. Thanks! :D http://www.Alchemic-Tempest.com

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0, and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces; in the end, I'd like to end up with a hard 768Kbps limit per LAN interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init and, riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb:

      /etc/sysconfig/htb/eth1:
          DEFAULT=30
          R2Q=100

      /etc/sysconfig/htb/eth1-2.root:
          RATE=768Kbps
          BURST=15k

      /etc/sysconfig/htb/eth1-2:30.dfl:
          RATE=768Kbps
          CEIL=788Kbps
          BURST=15k
          LEAF=sfq

    I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working... but when I try pulling a large file via the WAN interface the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this.

    Update: Here's my /etc/init.d/htb list output. It seems to make sense, but is the default rate for eth1 really 768Kbps?

      ### eth0: queueing disciplines
      qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
      qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
      ### eth0: traffic classes
      class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
      class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
      ### eth0: filtering rules
      filter parent 1: protocol ip pref 100 u32
      filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1
      filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30 match 00000000/00000000 at 12 match 00000000/00000000 at 16
      ### eth1: queueing disciplines
      qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
      qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec
      ### eth1: traffic classes
      class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
      class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
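    A note and a sketch for reference: in tc's unit notation "kbit" means kilobits per second while "kbps" means kilobytes per second, so RATE=768Kbps may well be parsed as 768 KB/s = 6144 kbit/s, which would match the 6144Kbit leaf rate in the listing above; if the intent is 768 kilobits per second, "768kbit" is the safer spelling. Below is a minimal sketch of the equivalent raw tc commands for one LAN interface (untested; names, rates and the HTB structure assumed from the HTB.init files above):

      # Clear any existing root qdisc, then rebuild the HTB tree by hand.
      tc qdisc del dev eth1 root 2>/dev/null
      tc qdisc add dev eth1 root handle 1: htb default 30 r2q 100
      # Root class: hard cap for everything leaving eth1.
      tc class add dev eth1 parent 1: classid 1:2 htb rate 768kbit burst 15k
      # Default leaf class; unclassified traffic lands here (default 30).
      tc class add dev eth1 parent 1:2 classid 1:30 htb rate 768kbit ceil 788kbit burst 15k
      # Fair queuing inside the leaf.
      tc qdisc add dev eth1 parent 1:30 handle 30: sfq perturb 10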

    Read the article

  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application you usually use one of the following; each comes with potential pitfalls:

      Stopwatch:
          The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily; Intel's Designer's Guide vol. 3b, section 16.11.1 ("Invariant TSC") states: "The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C- and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

      DateTime.Now:
          Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

    For reporting the measured values, the usual choices are:

      Console.WriteLine:
          Ok if not called too often.

      Debug.Print:
          Are you really measuring performance with Debug builds? Shame on you.

      Trace.WriteLine:
          Better, but you need to plug in some good output listener, like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify if something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure the impulse and location of a particle in a quantum system at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere. This will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate values, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

      1. Measure with a profiler to get an idea where potential bottlenecks are.
      2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
      3. Optimize it.
      4. Measure again. Be surprised that your optimization has made things worse.
      5. Think harder.
      6. Implement something that really works.
      7. Measure again.
      8. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffer format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

      using ProtoBuf;
      using System;
      using System.Diagnostics;
      using System.IO;
      using System.Reflection;
      using System.Runtime.Serialization;

      [DataContract, Serializable]
      class Data
      {
          [DataMember(Order = 1)] public int IntValue { get; set; }
          [DataMember(Order = 2)] public string StringValue { get; set; }
          [DataMember(Order = 3)] public bool IsActivated { get; set; }
          [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
      }

      class Program
      {
          static MemoryStream _Stream = new MemoryStream();

          // Reset the stream before every use so we measure serialization only.
          static MemoryStream Stream
          {
              get
              {
                  _Stream.Position = 0;
                  _Stream.SetLength(0);
                  return _Stream;
              }
          }

          static void Main(string[] args)
          {
              DataContractSerializer ser = new DataContractSerializer(typeof(Data));
              Data data = new Data
              {
                  IntValue = 100,
                  IsActivated = true,
                  StringValue = "Hi this is a small string value to check if serialization does work as expected"
              };

              var sw = Stopwatch.StartNew();
              int Runs = 1000 * 1000;
              for (int i = 0; i < Runs; i++)
              {
                  //ser.WriteObject(Stream, data);
                  Serializer.Serialize<Data>(Stream, data);
              }
              sw.Stop();
              Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
              Console.ReadLine();
          }
      }

    The results are indeed promising:

      Serializer      Time in ms    N objects
      protobuf-net           807    1,000,000
      DataContract         4,402    1,000,000

    Nearly a factor 5 faster, and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good), but the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this one by setting the Runs value of our performance test app not to one million but to 1:

      Serializer      Time in ms    N objects
      protobuf-net            85    1
      DataContract            24    1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by the faster serialization performance, is (assuming small objects) somewhere between 20,000 and 40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1,000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking more deeply at the profiling data I found that one of the 1,000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short in this discipline. A relatively unknown profiler (totally undeservedly so) is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out if this stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable that you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly ok. When you profile a complex system with many interconnected processes you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day: it might be that the bottleneck you found yesterday is gone today. Thanks to the GC and other random stuff it can become pretty hard to find things worth optimizing, if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing I make the code changes and measure again to check if something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and allocate fewer objects, the GC times will shift to some other location. If you are unlucky this will make your faster working code slower, because you now see GCs at times where none occurred before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things quickly in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
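    Where the code-gen cost matters, protobuf-net can be asked to do its type preparation eagerly at startup instead of on first use. A minimal sketch (assuming the Data type from the sample above; Serializer.PrepareSerializer<T>() is part of the protobuf-net API, but whether paying the cost at startup is acceptable depends on your scenario, so measure it):

      using ProtoBuf;
      using System;
      using System.Diagnostics;

      static class SerializerWarmUp
      {
          // Call once during application startup so the one-time code
          // generation does not land in the first user-visible operation.
          public static void Run()
          {
              var sw = Stopwatch.StartNew();
              Serializer.PrepareSerializer<Data>();
              sw.Stop();
              Console.WriteLine("Code-gen took {0:N0}ms", sw.Elapsed.TotalMilliseconds);
          }
      }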

    Read the article

  • Sound Waves Visualized with a Chladni Plate and Colored Sand [Video]

    - by Jason Fitzpatrick
    This eye-catching demonstration combines a Chladni plate, four piles of colored sand, and a rubber mallet to great effect: watch as the plate vibrates pattern after pattern into the sand. A Chladni plate, named after physicist Ernst Chladni, is a steel plate that vibrates when rubbed with a rubber ball-style mallet. Different ball sizes create different frequencies, and each frequency creates a different pattern in the sand placed atop the plate. Watch the video above to see how rubber balls, large and small, change the patterns. [via Neatorama]

    Read the article

  • LaTeX-like display programming environment

    - by Gage
    I used to be a hobbyist programmer, but now I'm also a fairly experienced physicist and find myself programming to solve certain problems quite a lot. In physics, we use variables with superscripts, subscripts, italics, underlines, etc. To bridge this gap to the computer we usually use LaTeX. Now, I generally use MATLAB for handling any data and such, and find it very irritating that I can't basically use LaTeX for variable names. Something as simple as σ_y has to be named either sigma_y or some descriptive name like peak_height_error. I don't necessarily want fully workable LaTeX in my code, but I do want to be able to use Greek letters and super/subscripts at the very least. Does this exist?
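    For what it's worth, some languages get partway there: Python 3 and Julia accept Unicode letters in identifiers, so Greek symbols work directly, even though true sub/superscript rendering does not. A minimal illustration in Python (the physics here is made up):

      import math

      # Python 3 identifiers may contain Unicode letters, so Greek names are
      # legal; the subscript is just an underscore convention.
      σ_y = 0.25               # peak height error
      λ = 532e-9               # wavelength in metres
      θ = math.asin(λ / 1e-6)  # a toy diffraction-style calculation
      print(σ_y, θ)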

    Read the article

  • Ever wonder why Earth spins?

    - by Gopinath
    Have you ever wondered why Earth spins on its axis and completes a revolution every day? Is there any force that keeps Earth spinning? Is it because of gravity or some magnetic force? Check out this video to learn why Earth spins and the basic physics behind the magic. If you find that the above video is in simple English and it's not convincing the physicist inside you, let's hear from a NASA scientist in the embedded video. A NASA scientist explains how Earth's rotation started, how fast it was billions of years ago, and what caused it to slow down to taking 24 hours to complete a revolution. Thanks @pinaldev

    Read the article

  • Find angle for projectile to meet target in parabolic arc

    - by TheBroodian
    I'm making a thing that launches projectiles in 2D. Its projectiles are fired with a set initial velocity and are only affected by gravity. Assuming that its target is within range, and that there aren't any obstacles, how would my thing find the appropriate angle at which to launch its projectile (in radians)? The equation for this is found here: Wikipedia: Angle Required to Hit Coordinate. Sadly, I'm not a physicist (a.k.a. I can't read smart-people math) and am having a hard time reading its breakdown. If only for the sake of anybody else who might read this, would anybody be kind enough to break the equation down into baby words, please?
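    For reference, the linked formula spelled out: with launch speed v, gravitational acceleration g, and a target at horizontal distance x and height y relative to the launch point, the angle is θ = arctan((v² ± √(v⁴ − g(gx² + 2yv²))) / (gx)). The minus sign gives the flat, direct arc; the plus sign gives the high, lobbed arc; and a negative value under the square root means the target is out of range. A minimal sketch in Python (variable names are assumed, not from the question):

      import math

      def launch_angles(v, g, x, y):
          """Return (low_arc, high_arc) launch angles in radians to hit the
          point (x, y), or None if the target is out of range for speed v."""
          disc = v**4 - g * (g * x**2 + 2 * y * v**2)  # term under the square root
          if disc < 0:
              return None                              # unreachable target
          root = math.sqrt(disc)
          low = math.atan2(v**2 - root, g * x)         # direct shot
          high = math.atan2(v**2 + root, g * x)        # lobbed shot
          return low, high

      # Example: launch speed 30, target 50 across and 5 up, g = 9.81.
      print(launch_angles(30.0, 9.81, 50.0, 5.0))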

    Read the article

  • SQL Server IO handling mechanism can be severely affected by high CPU usage

    - by sqlworkshops
    Are you using an SSD or a SAN / NAS based storage solution and sporadically observe SQL Server experiencing high IO wait times, or does your DAS / HDD from time to time become very slow according to SQL Server statistics? Read on… I need your help to up-vote my Connect item – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage. Instead of taking a few seconds, queries could take minutes/hours to complete when the CPU is busy. In SQL Server, when a query / request needs to read data that is not in the data cache, or when the request has to write to disk, like transaction log records, the request / task will queue up the IO operation and wait for it to complete (task in suspended state; this wait time is the resource wait time). When the IO operation is complete, the task will be queued to run on the CPU. If the CPU is busy executing other tasks, this task will wait (task in runnable state) until other tasks in the queue either complete, or get suspended due to waits, or exhaust their quantum of 4ms (this is the signal wait time, which along with resource wait time will increase the overall wait time). When the CPU becomes free, the task will finally be run on the CPU (task in running state). The signal wait time can be up to 4ms per runnable task; this is by design. So if a CPU has 5 runnable tasks in the queue, then this query, after the resource becomes available, might wait up to a maximum of 5 x 4ms = 20ms in the runnable state (normally less, as other tasks might not use their full quantum). In case the CPU usage is high, let's say many CPU-intensive queries are running on the instance, there is a possibility that the IO operations that are completed at the hardware and operating system level are not yet processed by SQL Server, keeping the task in the resource wait state for longer than necessary. In the case of an SSD, the IO operation might even complete in less than a millisecond, but it might take SQL Server hundreds of milliseconds, for instance, to process the completed IO operation. For example, let's say you have a user inserting 500 rows in individual transactions. When the transaction log is on an SSD or a battery backed up controller that has write cache enabled, all of these inserts will complete in 100 to 200ms. With a CPU-intensive parallel query executing across all CPU cores, the same inserts might take minutes to complete. WRITELOG wait time will be very high in this case (both under sys.dm_io_virtual_file_stats and sys.dm_os_wait_stats). In addition you will notice a large number of WRITELOG waits, since log records are written by the LOG WRITER, and hence very high signal_wait_time_ms, leading to more query delays. However, the Performance Monitor counter PhysicalDisk, Avg. Disk sec/Write will report very low latency times. Such delayed IO handling also occurs with read operations, with artificially very high PAGEIOLATCH_SH wait time (while the number of PAGEIOLATCH_SH waits remains the same). This problem will manifest more and more as customers start using SSD based storage for SQL Server, since they drive the CPU usage to the limits with faster IOs. We have a few workarounds for specific scenarios, but we think Microsoft should resolve this issue at the product level.
    We have a Connect item open – https://connect.microsoft.com/SQLServer/feedback/details/744650/sql-server-io-handling-mechanism-can-be-severely-affected-by-high-cpu-usage – (with example scripts) to reproduce this behavior; please up-vote the item so the issue will be addressed by the SQL Server product team soon.
    Thanks for your help and best regards,
    Ramesh Meyyappan
    Home: www.sqlworkshops.com
    LinkedIn: http://at.linkedin.com/in/rmeyyappan
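    A quick way to see whether signal waits dominate for the wait types discussed above (a sketch; sys.dm_os_wait_stats and these columns are standard SQL Server, but interpret the percentages against your own baseline, since the counters accumulate from instance startup):

      -- Compare signal (CPU queue) wait time with total wait time.
      -- A high signal share suggests completed IOs are waiting on busy
      -- CPUs rather than on the disks themselves.
      SELECT wait_type,
             waiting_tasks_count,
             wait_time_ms,
             signal_wait_time_ms,
             100.0 * signal_wait_time_ms / NULLIF(wait_time_ms, 0) AS signal_wait_pct
      FROM sys.dm_os_wait_stats
      WHERE wait_type IN ('WRITELOG', 'PAGEIOLATCH_SH', 'PAGEIOLATCH_EX')
      ORDER BY wait_time_ms DESC;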

    Read the article

  • System Error when running PyQt4's loadUi()

    - by user633804
    Hello, I'm pretty new to Qt, Python and their combinations. I'm currently writing a QGIS plugin in Python (I used QtCreator 2.1 (Qt Designer 4.7) to generate a .ui file and am now trying to use it for a Quantum GIS plugin that's written in Python 2.5, running in the Quantum GIS Python 2.5 console). I am running into trouble when loading the .ui file dynamically: the program fails in the loadUi() function. What throws me off is that the error occurs outside my script. Does that mean I'm passing something wrong into it? Where does the error come in? Any hints on what could be wrong?

      code_dir = os.path.dirname(os.path.abspath(__file__))
      self.ui = loadUi(os.path.join(code_dir, "Ui_myfile.ui"), self)

    This is the error output I am getting (minus the first paragraph):

      File "C:/Dokumente und Einstellungen/name.name/.qgis/python/plugins\myfile\myfile_gui.py", line 42, in __init__
        self.ui = loadUi(os.path.join(code_dir, "Ui_myfile.ui"), self)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\__init__.py", line 112, in loadUi
        return DynamicUILoader().loadUi(uifile, baseinstance)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\Loader\loader.py", line 21, in loadUi
        return self.parse(filename)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 768, in parse
        actor(elem)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 616, in createUserInterface
        self.traverseWidgetTree(elem)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 594, in traverseWidgetTree
        handler(self, child)
      File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 233, in createWidget
        topwidget.setCentralWidget(widget)
      SystemError: error return without exception set
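    For comparison, the standard dynamic-loading pattern looks like the sketch below (file name and class names are assumed). One classic source of trouble with loadUi(ui_file, baseinstance) is a baseinstance whose class does not match the top-level widget class stored in the .ui file, e.g. passing a plain QWidget for a form saved as a QMainWindow; the setCentralWidget() call in the traceback suggests the form's top level is indeed a QMainWindow.

      import os
      from PyQt4 import QtGui, uic

      class MyPluginGui(QtGui.QMainWindow):
          """The base class should match the top-level class in the .ui file."""
          def __init__(self, parent=None):
              QtGui.QMainWindow.__init__(self, parent)
              ui_path = os.path.join(os.path.dirname(os.path.abspath(__file__)),
                                     "Ui_myfile.ui")
              # loadUi fills this instance with the widgets declared in the file.
              uic.loadUi(ui_path, self)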

    Read the article

  • Current trends in Random Access Memory speed [closed]

    - by Vetal
    As far as I know, because of the laws of physics there will not be any tangible improvement in CPU cycles per second for the near future. However, because of the von Neumann bottleneck, that seems not to be an issue for non-server applications. So what about RAM: are there any upcoming technologies that promise to improve memory speed, or are we stuck with the current situation until quantum computers come out of the labs?

    Read the article

  • Do Not Optimize Without Measuring

    - by Alois Kraus
    Recently I had to do some performance work, which included reading a lot of code. It is fascinating what ideas people come up with to solve a problem. Especially when there is no problem. When you look at other people's code you will not be able to tell if it is well performing or not by reading it. You need to execute it with some sort of tracing, or even better under a profiler. The first rule of the performance club is not to think and then optimize, but to measure, think and then optimize. The second rule is to do this in a loop, to prevent bad things from slipping into your code base for too long. If you skip the measure step for some reason and optimize directly, it is like changing the wave function in quantum mechanics. This has no observable effect in our world, since it represents only a probability distribution of all possible values. In quantum mechanics you need to let the wave function collapse to a single value. A collapsed wave function has therefore not many but one distinct value. This is what we physicists call a measurement. If you optimize your application without measuring it, you are just changing the probability distribution of your potential performance values. Which performance your application actually has is still unknown. You only know that it will be within a specific range with a certain probability. As usual there are unlikely values within your distribution, like a startup time of 20 minutes, which should happen only once in 100,000 years. 100,000 years are a very short time when the first customer tries your heavily distributed networking application over a slow WIFI network…

    What is the point of this? Every programmer/architect has a mental performance model in his head. A model always has a set of explicit preconditions and a lot more implicit assumptions baked into it. When the model is good it will help you to think of good designs, but it can also be the source of problems. In real world systems not all assumptions of your performance model (implicit or explicit) hold true any longer. The only way to connect your performance model and the real world is to measure. In the WIFI example the model assumed a low latency, high bandwidth LAN connection. When this assumption became wrong the system showed a drastic change in startup time.

    Let's look at an example. Let's assume we want to cache some expensive UI resource like font objects. For this undertaking we create a Cache class with the UI themes we want to support. Since fonts are expensive objects, we create them on demand the first time a theme is requested. A simple example of a theme cache might look like this:

      using System;
      using System.Collections.Generic;
      using System.Drawing;

      struct Theme
      {
          public Color Color;
          public Font Font;
      }

      static class ThemeCache
      {
          static Dictionary<string, Theme> _Cache = new Dictionary<string, Theme>
          {
              {"Default", new Theme { Color = Color.AliceBlue }},
              {"Theme12", new Theme { Color = Color.Aqua }},
          };

          public static Theme Get(string theme)
          {
              Theme cached = _Cache[theme];
              if (cached.Font == null)
              {
                  Console.WriteLine("Creating new font");
                  cached.Font = new Font("Arial", 8);
              }
              return cached;
          }
      }

      class Program
      {
          static void Main(string[] args)
          {
              Theme item = ThemeCache.Get("Theme12");
              item = ThemeCache.Get("Theme12");
          }
      }

    This cache creates font objects only once, since on the first retrieval of a Theme object the font is added to it. When we let the application run it should print "Creating new font" only once. Right? Wrong!
    The vigilant readers have spotted the issue already. The creator of this cache class wanted to get maximum performance, so he decided that the Theme object should be a value type (struct), to not put too much pressure on the garbage collector. The code

      Theme cached = _Cache[theme];
      if (cached.Font == null)
      {
          Console.WriteLine("Creating new font");
          cached.Font = new Font("Arial", 8);
      }

    works with a copy of the value stored in the dictionary. This means we mutate a copy of the Theme object and return it to our caller, but the original Theme object in the dictionary will always have null in its Font field! The solution is to change the declaration of struct Theme to class Theme, or to update the Theme object in the dictionary. Our cache as it currently stands is actually a non-caching cache. The funny thing is that I found this out with a profiler, by looking at which objects were finalized: I found way too many font objects being finalized. After a bit of debugging I found that the allocation source for the Font objects was this cache. Since this cache had been there for years, it means that: the cache was never needed, since I found no perf issue due to the creation of font objects; the cache was never profiled to see if it brought any performance gain; and to make the cache beneficial it would need to be accessed much more often. That was the story of the non-caching cache. Next time I will write something about measuring.
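    One possible fix, sketched along the second option named above (keep Theme a struct but write the mutated copy back to the dictionary; changing struct Theme to class Theme works just as well):

      public static Theme Get(string theme)
      {
          Theme cached = _Cache[theme];
          if (cached.Font == null)
          {
              Console.WriteLine("Creating new font");
              cached.Font = new Font("Arial", 8);
              _Cache[theme] = cached; // store the mutated copy back, so the font really is cached
          }
          return cached;
      }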

    Read the article

  • Multilevel Queue Scheduling (MQS) with Round Robin

    - by stackuser
    I'm trying to use MQS to create a Gantt chart of 5 processes (P1-P5), as well as their waiting, response, and turnaround times (and averages of those metrics) within a CPU task schedule. Here's the basic table of arrival times and bursts: [table image]. Here's my actual work version after ticking off the finished processes: [worked chart image]. The time quantum for each time slice is (2 queues) TQ1=4 and TQ2=3. Note that I'm doing MQS and NOT MLFQ. It just doesn't feel like I'm doing MQS right here; I know this gets a little complex, but maybe someone can point out where I'm going totally wrong.
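    For checking a hand-drawn chart, a small simulator helps. A minimal sketch of round-robin within a single queue (Python; the burst values are made up, since the question's tables are images; for MQS you would run each queue with its own quantum, TQ1=4 and TQ2=3, according to the inter-queue policy):

      from collections import deque

      def round_robin(bursts, quantum):
          """Simulate RR for processes that are all ready at t=0.
          bursts maps name -> burst time; returns the Gantt chart
          as a list of (name, start, end) slices."""
          remaining = dict(bursts)
          queue = deque(bursts)              # FIFO ready queue
          t, chart = 0, []
          while queue:
              p = queue.popleft()
              run = min(quantum, remaining[p])
              chart.append((p, t, t + run))
              t += run
              remaining[p] -= run
              if remaining[p] > 0:           # unfinished: back of the queue
                  queue.append(p)
          return chart

      # Hypothetical bursts, quantum TQ1=4:
      print(round_robin({"P1": 5, "P2": 3, "P3": 8}, quantum=4))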

    Read the article

  • MAAS and Openstack Network

    - by user281985
    Trying to figure out the best way for OpenStack and MAAS networks to co-exist. Assuming MAAS used 10.0.0.0/24 for provisioning the various OpenStack nodes, once OpenStack, with Quantum, is deployed, should 10.0.0.0/24 be used as the management network, the external network, both, or discarded? The reason for the question is that I ran a deployment where OpenStack used 10.0.0.0/24 as its management network and MAAS DHCP and DNS were active; however, I could only access VMs through a namespace and was not able to utilize MAAS's DHCP to assign floating IPs. (The VMs' network was through an internal bridge, 10.10.10.10, and the external bridge was using the same interface as MAAS.) Any thoughts or ideas are appreciated.

    Read the article

  • OpenStack: How to make Cloudify use the floating IP instead of the fixed one?

    - by polslinux
    I have a problem with Cloudify (both the 2.5 and 2.6-rc releases). I have an all-in-one OpenStack 2013.1.1 setup and I'm trying to use Cloudify to bootstrap a CirrOS 0.3.1 VM. My Quantum configuration is: a pool of fixed IPs (10.0.0.0/24) for VM management; a pool of floating IPs (192.168.1.170-190) taken from 192.168.1.1/24 (my LAN). When I deploy a VM, an IP from 10.0.0.0/24 is given first (I cannot reach it from my PCs because it is only for VM management), and then I associate a floating IP, with which I can ping (and ssh into) the deployed machine. The problem is when I do: bootstrap-cloud openstack, because Cloudify stays forever at "attempting to access management vm 10.0.0.3", and this is due to the fact that 10.0.0.3 is not reachable. What can I do to make Cloudify take the floating IP instead of the fixed one?

    Read the article

  • Is there a difference between multi-tasking and time-sharing?

    - by Dummy Derp
    Just going over my school notes: my teacher identifies multi-tasking OS and time-sharing OS as two different things. I really don't see a difference between the two. MULTI-TASKING: you load a number of programs into memory and execute them. You execute another program if the time quantum allocated to the current program expires, OR if it goes on to do I/O and leaves the CPU, OR if it finishes execution. TIME-SHARING: the same, again. The same applies in the case of serial processing and batch processing. Although they are the same, I guess the only difference would be the way in which control information is passed to the CPU. Maybe, and again MAYBE, in serial processing you need to provide the punch cards with all the processes, while in batch processing the entire batch uses the same set of control information, like all the print jobs having the same control information.

    Read the article

  • Convert a Delphi example using TDatabase and local paradox table to server storage

    - by Brian Frost
    I am looking at the Developer Express Quantum Grid example 'IssueList', which is a useful bug reporting and tracking application that's almost ready to go out of the box. It uses a TDatabase component with several Paradox (.db) tables. Is it simple to rejig the TDatabase settings to use a database on a shared machine, so that several of us can access it together across the network? If so, what would the steps be, please?

    Read the article

  • Recommendations for a free GIS library supporting raster images

    - by gspr
    Hi. I'm quite new to the whole field of GIS, and I'm about to make a small program that essentially overlays GPS tracks on a map, together with some other annotations. I primarily need to allow scanned (thus raster) maps (although it would be nice to support proper map formats and something like OpenStreetMap in the long run). My first exploratory program uses Qt's graphics view framework and overlays the GPS points by simply projecting them onto the tangent plane to the WGS84 ellipsoid at a calibration point. This gives half-decent accuracy and actually looks good. But then I started wondering. To get the accuracy I need (i.e. remove the "half" in "half-decent"), I have to correct for the map projection. While the math is not a problem in itself, supporting many map projections feels like needless work. Even though a few projections would probably be enough, I started thinking about just using something like the PROJ.4 library to do my projections. But then, why not take it all the way? Perhaps I might as well use a full-blown map library such as Mapnik (edit: Quantum GIS also looks very nice), which will probably pay off when I start to want even fancier annotations or some other symptom of featuritis. So, finally, to the question: what would you do? Would you use a full-blown map library? If so, which one? Again, it's important that it supports using (and zooming in and out with) raster maps and has pretty overlay features. Or would you just keep it simple and go with Qt's own graphics view framework together with something like PROJ.4 to handle the map projections? I appreciate any feedback! Some technicalities: I'm writing in C++ with a Qt-based GUI, so I'd prefer something that plays relatively nicely with those. Also, the library must be free software (as in FOSS) and at least decently cross-platform (GNU/Linux, Windows and Mac, at least). Edit: OK, it seems I didn't do quite enough research before asking this question. Both Quantum GIS and Mapnik seem very well suited for my purpose, the former especially so since it's based on Qt.
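    For the lightweight route, the classic PROJ.4 C API is small. A minimal sketch (the target projection, UTM zone 33N, is an arbitrary example; this is the old pre-PROJ-5 API matching the library name used above, and error handling is omitted):

      #include <stdio.h>
      #include <proj_api.h>

      int main(void)
      {
          /* Source: plain WGS84 geographic coordinates. */
          projPJ wgs84 = pj_init_plus("+proj=longlat +datum=WGS84 +no_defs");
          /* Target: some projected CRS; zone 33N chosen arbitrarily. */
          projPJ utm33 = pj_init_plus("+proj=utm +zone=33 +datum=WGS84 +no_defs");

          double x = 15.0 * DEG_TO_RAD;  /* longitude in radians for pj_transform */
          double y = 59.0 * DEG_TO_RAD;  /* latitude */

          pj_transform(wgs84, utm33, 1, 1, &x, &y, NULL);
          printf("easting=%.2f northing=%.2f\n", x, y);

          pj_free(wgs84);
          pj_free(utm33);
          return 0;
      }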

    Read the article

  • Unexpected ArrayIndexOutOfBoundsException in JavaFX application, referring to no array

    - by Eugene
    I have the following code:

      public void setContent(Importer3D importer) {
          if (DEBUG) { System.out.println("Initialization of Mesh's arrays"); }
          coords = importer.getCoords();
          texCoords = importer.getTexCoords();
          faces = importer.getFaces();

          if (DEBUG) { System.out.println("Applying Mesh's arrays"); }
          mesh = new TriangleMesh();
          mesh.getPoints().setAll(coords);
          mesh.getTexCoords().setAll(texCoords);
          mesh.getFaces().setAll(faces);

          if (DEBUG) { System.out.println("Initialization of the material"); }
          initMaterial();

          if (DEBUG) { System.out.println("Setting the MeshView"); }
          meshView.setMesh(mesh);
          meshView.setMaterial(material);
          meshView.setDrawMode(DrawMode.FILL);

          if (DEBUG) { System.out.println("Adding to 3D scene"); }
          root3d.getChildren().clear();
          root3d.getChildren().add(meshView);
          if (DEBUG) { System.out.println("3D model is ready!"); }
      }

    The Importer3D class part:

      private void load(File file) { stlLoader = new STLLoader(file); }
      public float[] getCoords() { return stlLoader.getCoords(); }
      public float[] getTexCoords() { return stlLoader.getTexCoords(); }
      public int[] getFaces() { return stlLoader.getFaces(); }

    The STLLoader:

      public class STLLoader {
          public STLLoader(File file) {
              stlFile = new STLFile(file);
              loadManager = stlFile.loadManager;
              pointsArray = new PointsArray(stlFile);
              texCoordsArray = new TexCoordsArray();
          }
          public float[] getCoords() { return pointsArray.getPoints(); }
          public float[] getTexCoords() { return texCoordsArray.getTexCoords(); }
          public int[] getFaces() { return pointsArray.getFaces(); }

          private STLFile stlFile;
          private PointsArray pointsArray;
          private TexCoordsArray texCoordsArray;
          private FacesArray facesArray;
          public SimpleBooleanProperty finished = new SimpleBooleanProperty(false);
          public LoadManager loadManager;
      }

    The PointsArray file:

      public class PointsArray {
          public PointsArray(STLFile stlFile) {
              this.stlFile = stlFile;
              initPoints();
          }

          private void initPoints() {
              ArrayList<Double> pointsList = stlFile.getPoints();
              ArrayList<Double> uPointsList = new ArrayList<>();
              faces = new int[pointsList.size() * 2];
              int n = 0;
              for (Double d : pointsList) {
                  if (uPointsList.indexOf(d) == -1) {
                      uPointsList.add(d);
                  }
                  faces[n] = uPointsList.indexOf(d);
                  faces[++n] = 0;
                  n++;
              }
              int i = 0;
              points = new float[uPointsList.size()];
              for (Double d : uPointsList) {
                  points[i] = d.floatValue();
                  i++;
              }
          }

          public float[] getPoints() { return points; }
          public int[] getFaces() { return faces; }

          private float[] points;
          private int[] faces;
          private STLFile stlFile;
          public static boolean DEBUG = true;
      }

    And the STLFile:

      ArrayList<Double> coords = new ArrayList<>();
      double temp;

      private void readV(STLParser parser) {
          for (int n = 0; n < 3; n++) {
              if (!(parser.ttype == STLParser.TT_WORD && parser.sval.equals("vertex"))) {
                  System.err.println("Format Error: expecting 'vertex' on line " + parser.lineno());
              } else {
                  if (parser.getNumber()) {
                      temp = parser.nval;
                      coords.add(temp);
                      if (DEBUG) {
                          System.out.println("Vertex:");
                          System.out.print("X=" + temp + " ");
                      }
                      if (parser.getNumber()) {
                          temp = parser.nval;
                          coords.add(temp);
                          if (DEBUG) { System.out.print("Y=" + temp + " "); }
                          if (parser.getNumber()) {
                              temp = parser.nval;
                              coords.add(temp);
                              if (DEBUG) { System.out.println("Z=" + temp + " "); }
                              readEOL(parser);
                          } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno());
                      } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno());
                  } else System.err.println("Format Error: expecting coordinate on line " + parser.lineno());
              }
              if (n < 2) {
                  try {
                      parser.nextToken();
                  } catch (IOException e) {
                      System.err.println("IO Error on line " + parser.lineno() + ": " + e.getMessage());
                  }
              }
          }
      }

      public ArrayList<Double> getPoints() { return coords; }

    As a result of all of this code I expected to get a 3D model in the MeshView. But the actual result is very strange: everything works, and in DEBUG mode I get "3D model is ready!" from setContent(), and then an unexpected ArrayIndexOutOfBoundsException:

      File readed
      Initialization of Mesh's arrays
      Applying Mesh's arrays
      Initialization of the material
      Setting the MeshView
      Adding to 3D scene
      3D model is ready!
      java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 32252
        at com.sun.javafx.collections.ObservableFloatArrayImpl.rangeCheck(ObservableFloatArrayImpl.java:276)
        at com.sun.javafx.collections.ObservableFloatArrayImpl.get(ObservableFloatArrayImpl.java:184)
        at javafx.scene.shape.TriangleMesh.computeBounds(TriangleMesh.java:262)
        at javafx.scene.shape.MeshView.impl_computeGeomBounds(MeshView.java:151)
        at javafx.scene.Node.updateGeomBounds(Node.java:3497)
        at javafx.scene.Node.getGeomBounds(Node.java:3450)
        at javafx.scene.Node.getLocalBounds(Node.java:3432)
        at javafx.scene.Node.updateTxBounds(Node.java:3510)
        at javafx.scene.Node.getTransformedBounds(Node.java:3350)
        at javafx.scene.Node.updateBounds(Node.java:516)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.SubScene.updateBounds(SubScene.java:556)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.Parent.updateBounds(Parent.java:1668)
        at javafx.scene.Scene$ScenePulseListener.pulse(Scene.java:2309)
        at com.sun.javafx.tk.Toolkit.firePulse(Toolkit.java:329)
        at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:479)
        at com.sun.javafx.tk.quantum.QuantumToolkit.pulse(QuantumToolkit.java:459)
        at com.sun.javafx.tk.quantum.QuantumToolkit$13.run(QuantumToolkit.java:326)
        at com.sun.glass.ui.InvokeLaterDispatcher$Future.run(InvokeLaterDispatcher.java:95)
        at com.sun.glass.ui.win.WinApplication._runLoop(Native Method)
        at com.sun.glass.ui.win.WinApplication.access$300(WinApplication.java:39)
        at com.sun.glass.ui.win.WinApplication$3$1.run(WinApplication.java:101)
        at java.lang.Thread.run(Thread.java:724)
      Exception in thread "JavaFX Application Thread" java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 32252
        at com.sun.javafx.collections.ObservableFloatArrayImpl.rangeCheck(ObservableFloatArrayImpl.java:276)
        at com.sun.javafx.collections.ObservableFloatArrayImpl.get(ObservableFloatArrayImpl.java:184)

    The strangest thing is that this exception keeps repeating until I close the program, and moreover it doesn't point to any of my arrays. What is this? And why does it happen?

    Read the article

  • How to parse a directory tree in python?

    - by chutsu
    I have a directory called "notes". Within the notes I have categories, which are named "science", "maths", ... and within those folders are sub-categories, such as "Quantum Mechanics" and "Linear Algebra":

      ./notes
          ./notes/maths
              ./notes/maths/linear_algebra
          ./notes/physics
              ./notes/physics/quantum_mechanics

    My problem is that I don't know how to put the categories and subcategories into TWO SEPARATE lists/arrays.
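    A minimal sketch of one way to do this with os.listdir, assuming the two-level layout above (categories are the directories directly under notes; subcategories are the directories one level below):

      import os

      notes_root = "./notes"
      categories = []
      subcategories = []

      for name in sorted(os.listdir(notes_root)):
          cat_path = os.path.join(notes_root, name)
          if os.path.isdir(cat_path):                    # e.g. 'maths', 'physics'
              categories.append(name)
              for sub in sorted(os.listdir(cat_path)):
                  if os.path.isdir(os.path.join(cat_path, sub)):
                      subcategories.append(sub)          # e.g. 'linear_algebra'

      print(categories)     # ['maths', 'physics']
      print(subcategories)  # ['linear_algebra', 'quantum_mechanics']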

    Read the article

  • How long does each thread timeslice last in Windows XP?

    - by IHawk
    I am trying to find out how long each thread timeslice (quantum) lasts in Windows, but the only information I have found is about the clock ticks being from 15 to 20ms, or 20-30ms. How can I find this information? I think it may vary from OS to OS, but I am not certain. I appreciate any suggestion on this subject. Thank you.
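    For a rough idea (hedged: the exact value depends on the Windows edition, the HAL's clock interval, and the Win32PrioritySeparation registry setting): client editions such as XP default to a quantum of 2 clock intervals, and a clock interval is roughly 10-15ms depending on the machine, so 2 x 10-15ms gives about 20-30ms per timeslice, which matches the figures above. Server editions default to 12 clock intervals, and foreground processes on client editions get a lengthened quantum.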

    Read the article

  • This company buries ashes in space for $3000

    - by Gopinath
    Do space burials sound crazy to you? Then you may not be a big fan of science fiction, or Japanese. According to a study conducted by NASA, many science fiction fans prefer their final rites to be held in space, and you can read more details about the research over here on the NASA website. The other people who fancy space burials are Japanese Buddhists. For those who are not aware of space burials: it's a procedure in which a small sample of the cremated ashes of the deceased is launched into space using a spacecraft. The spacecraft will remain in orbit around the Earth or other planets for decades, eventually burning up in the atmosphere. Celestis, a US based company, is a pioneer in the memorial spaceflight business, and so far they have conducted a total of 10 space burials. A few of the famous people buried in space are Gene Roddenberry (creator of Star Trek), Gerard K. O'Neill (space physicist) and Clyde Tombaugh (astronomer and discoverer of Pluto); the complete list is available on this Wikipedia page. In the coming months Celestis has planned a launch of its latest memorial spacecraft, and you can send your loved one's remains for just $3000. Once they put the ashes in space they will also let you track the location of the spacecraft in orbit using a real-time feed. Story via BBC and cc image credit: flickr/gsfc

    Read the article

  • CERN Announces the Discovery of a Higgs-Boson-like Particle

    - by Jason Fitzpatrick
    CERN scientists dropped a press release today indicating they've found a particle consistent with the long sought after Higgs boson, the "God particle", a discovery that could help radically refine our understanding of the Standard Model of particle physics. For years, scientists at CERN have been harnessing the power of the Large Hadron Collider to answer fundamental questions about the nature of particles and the universe around us. In the above video John Ellis, a theoretical physicist, answers the question "What is the Higgs boson?" The video pairs nicely with the CERN press release: "We observe in our data clear signs of a new particle, at the level of 5 sigma, in the mass region around 126 GeV. The outstanding performance of the LHC and ATLAS and the huge efforts of many people have brought us to this exciting stage," said ATLAS experiment spokesperson Fabiola Gianotti, "but a little more time is needed to prepare these results for publication." "The results are preliminary but the 5 sigma signal at around 125 GeV we're seeing is dramatic. This is indeed a new particle. We know it must be a boson and it's the heaviest boson ever found," said CMS experiment spokesperson Joe Incandela. "The implications are very significant and it is precisely for this reason that we must be extremely diligent in all of our studies and cross-checks." "It's hard not to get excited by these results," said CERN Research Director Sergio Bertolucci. "We stated last year that in 2012 we would either find a new Higgs-like particle or exclude the existence of the Standard Model Higgs. With all the necessary caution, it looks to me that we are at a branching point: the observation of this new particle indicates the path for the future towards a more detailed understanding of what we're seeing in the data."

    Read the article

  • How can I make sure my evening project code is mine?

    - by Sebastian
    I'm a physicist with a CS degree and just started my PhD at a tech company (I wanted to do applied research). It deals with large-scale finite element simulations. After reviewing their current approach, I think that a radically different method has to be applied (they are using a commercial tool which is very limited). I'd rather base my research on an open source finite element solver and write a program which makes use of it. I'd like to develop this idea in the evenings, because that's the time that best suits me for programming (during the day I prefer reading and maths), and use it at a late stage of my PhD. I'd like to have the option to release my program as open source on my website as a reference, for future personal or even commercial (e.g. consulting) use. How can I make sure that my company doesn't claim ownership of the code? I don't really know how. I thought that a version control system could help (check in only in the evening). This would document that I programmed outside regular office hours (documented elsewhere). But such data can be easily manufactured. Any other ideas? I want to stress that I'm not interested in selling software. Jurisdiction is the EU, if that matters. Thank you.

    Read the article

  • Port scientific software to GPU and publish it

    - by Werner
    Hi, let's say that I am a physicist and that I am the master of the universe when it comes to porting already existing software to GPUs with 100x or more speedups. Let's say that I find that some other scientist, who does not know how to program GPUs, publishes on his/her website the open source code of a physical simulation program in the field I am an expert on. Let's say that I realize "I can port that code to GPU", and I suggest it to him, but he shows no interest. My interest here is: 1) to port it to GPU; 2) to publish this result in a scientific journal related to physics and/or computer science. My question for you is: 1. would you proceed here to port the code to GPU (or another new architecture) and publish it? 2. how would you do it, and which journal do you suggest? Thanks

    Read the article

< Previous Page | 1 2 3 4  | Next Page >