Search Results

Search found 1803 results on 73 pages for 'pi calculation'.


  • Some sites won't load on Ubuntu/Mint

    - by Or W
    I have a REALLY weird problem with either my network or my OS. Last week I suddenly had difficulties loading some websites or, even more odd, some parts of different websites. For example, I could load gmail.com, log in and view the list of emails in my inbox, but when I clicked one of them it would just time out. Another example is http://www.ynet.co.il: I can view the home page, but going into any one of the articles fails (times out). I've tried Chrome, Firefox and Opera; all fail the same way. If I take the URL of a page I cannot load via the browser and try to wget it through the console, I get the file just fine. I've formatted my machine (it used to run Ubuntu 13.04) and installed Linux Mint this time; it worked fine for a few days and now I'm having the same exact issues again. It's important to note that I have other machines connected either directly or via Wi-Fi to the router and they are all working fine (two Win7 machines and 1 Raspberry Pi). Another strange behavior is that I can ftp or ssh to remote machines but cannot send files via ftp (times out) even if I set passive mode ON, and when using ssh I can do just about anything except paste text into the remote machine; for example, if I nano a file on the remote machine and try to paste anything from my clipboard, it freezes.
    What I've tried so far:
    - Disabled IPv6 in the network admin tool (and disabled IPv6 on Firefox's about:config page)
    - Changed the router port and the network cable
    - Went to the store and bought a new standalone PCIe network adapter
    - Connected my Win7 laptop using the same cable and router port (sites that were not working on my Mint are working just fine on the Win7 machine)
    - Loaded Mint from a live CD, got the same result
    - Tried changing the MTU (was 1500, tried 1492)
    Some observations:
    - When I clear my browser cache and go to facebook.com, for example, the homepage loads but I fail to load any profile/group page. If I refresh the facebook.com homepage a couple of times it stops and fails to load until I clear my browser cache. I changed the Chrome cache folder permissions to 0777, but that did not help.
    - When I run netstat -n I see A LOT of connections in 'FIN_WAIT' mode (I'm guessing that's from when I try to refresh pages that are not working and they time out); I have no idea what it means or if it helps anyone figure out what's wrong.
    - The sites that are not loading correctly are always the same; they don't vary, and they fail to load exactly the same way on all three browsers that I've tried.
    - When I Googled 'Ubuntu some sites not loading' I saw a huge amount of complaints just like mine, but none that I could find actually says what the problem is or how they fixed it.
    Technical stuff: netstat -n, ps aux, netstat -nr

    Read the article

  • Vim digraphs are not working

    - by Hauleth
    I've tried to input digraphs in Vim (without vi compatibility), and unfortunately I can't. After using ControlKP * it should input big greek pi letter. I've also tried using PBS * with :set digraph, but no success either. My gvim --version output: VIM - Vi IMproved 7.3 (2010 Aug 15, compiled Oct 23 2012 18:42:18) Included patches: 1-712 Compiled by ArchLinux Big version with GTK2 GUI. Features included (+) or not (-): +arabic +autocmd +balloon_eval +browse ++builtin_terms +byte_offset +cindent +clientserver +clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments +conceal +cryptv +cscope +cursorbind +cursorshape +dialog_con_gui +diff +digraphs +dnd -ebcdic +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path +float +folding -footer +fork() +gettext -hangul_input +iconv +insert_expand +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap +lua +menu +mksession +modify_fname +mouse +mouseshape +mouse_dec +mouse_gpm -mouse_jsbterm +mouse_netterm +mouse_sgr -mouse_sysmouse +mouse_urxvt +mouse_xterm +multi_byte +multi_lang -mzscheme +netbeans_intg +path_extra +perl +persistent_undo +postscript +printer -profile +python -python3 +quickfix +reltime +rightleft +ruby +scrollbind +signs +smartindent -sniff +startuptime +statusline -sun_workshop +syntax +tag_binary +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects +title +toolbar +user_commands +vertsplit +virtualedit +visual +visualextra +viminfo +vreplace +wildignore +wildmenu +windows +writebackup +X11 -xfontset +xim +xsmp_interact +xterm_clipboard -xterm_save system vimrc file: "/etc/vimrc" user vimrc file: "$HOME/.vimrc" user exrc file: "$HOME/.exrc" system gvimrc file: "/etc/gvimrc" user gvimrc file: "$HOME/.gvimrc" system menu file: "$VIMRUNTIME/menu.vim" fall-back for $VIM: "/usr/share/vim" Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -DFEAT_GUI_GTK -pthread -I/usr/include/gtk-2.0 -I/usr/lib/gtk-2.0/include -I/usr/include/atk-1.0 -I/usr/include/cairo -I/usr/include/gdk-pixbuf-2.0 -I/usr/include/pango-1.0 -I/usr/include/glib-2.0 -I/usr/lib/glib-2.0/include -I/usr/include/pixman-1 -I/usr/include/freetype2 -I/usr/include/libpng15 -I/usr/local/include -march=x86-64 -mtune=generic -O2 -pipe -fstack-protector --param=ssp-buffer-size=4 -U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 Linking: gcc -L. -Wl,-O1,--sort-common,--as-needed,-z,relro -rdynamic -Wl,-export-dynamic -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -L/usr/local/lib -Wl,--as-needed -o vim -lgtk-x11-2.0 -lgdk-x11-2.0 -latk-1.0 -lgio-2.0 -lpangoft2-1.0 -lpangocairo-1.0 -lgdk_pixbuf-2.0 -lcairo -lpango-1.0 -lfreetype -lfontconfig -lgobject-2.0 -lglib-2.0 -lSM -lICE -lXt -lX11 -lXdmcp -lSM -lICE -lm -lncurses -lnsl -lacl -lattr -lgpm -ldl -L/usr/lib -llua -Wl,-E -Wl,-rpath,/usr/lib/perl5/core_perl/CORE -Wl,-O1,--sort-common,--as-needed,-z,relro -fstack-protector -L/usr/local/lib -L/usr/lib/perl5/core_perl/CORE -lperl -lnsl -ldl -lm -lcrypt -lutil -lpthread -lc -L/usr/lib/python2.7/config -lpython2.7 -lpthread -ldl -lutil -lm -Xlinker -export-dynamic -lruby -lpthread -lrt -ldl -lcrypt -lm -L/usr/lib Can you tell me how to get my ControlK digraphs work? EDIT: Output of :verbose set enc? fenc? is: encoding=utf-8 Last set from ~/.vimrc fileencoding=

    Read the article

  • VirtualBox Port Forward not working when Guest IP *IS* specified (while doc says opposite)

    - by Patrick
    Trying to port forward from host (Mac OS X) 127.0.0.1:8282 - guest (CentOS)'s 10.10.10.10:8080. Existing port forwards include 127.0.0.1:8181 and 9191 to guest without any IP specified (so whatever it gets through DHCP, as explained in the documentation). Here is how the non-working binding was added: VBoxManage modifyvm "VM name" --natpf1 "rule3,tcp,127.0.0.1,8282,10.10.10.10,8080" Here is how the working ones were added: VBoxManage modifyvm "VM name" --natpf1 "rule1,tcp,127.0.0.1,8181,,80" VBoxManage modifyvm "VM name" --natpf1 "rule2,tcp,127.0.0.1,9191,,9090" And by "non-working", I of course mean not listening (as a prerequisite to forwarding): $ lsof -Pi -n|grep Virtual|grep LISTEN VirtualBo 27050 user 21u IPv4 0x2bbdc68fd363175d 0t0 TCP 127.0.0.1:9191 (LISTEN) VirtualBo 27050 user 22u IPv4 0x2bbdc68fd0e0af75 0t0 TCP 127.0.0.1:8181 (LISTEN) There should be a similar line above but with 127.0.0.1:8282. Just to be clear, this port is listening perfectly fine on the guest itself. And when I remove the guest IP (i.e., clear the 10.10.10.10) the forward works fine, albeit to eth0 (not eth1 where I need it). I can tcpdump and watch the traffic flow back and forth. And yes, I've disabled iptables entirely while testing -- it's not getting blocked anywhere on the guest. As VirtualBox writes in their documentation, you are required to specify the guest IP if it's static (makes sense, no DHCP record it keeps): "If for some reason the guest uses a static assigned IP address not leased from the built-in DHCP server, it is required to specify the guest IP when registering the forwarding rule:". However, doing so (as I need to), seems to break the port forward with nary a report in any log file I can find. (I've reviewed everything in ~/Library/VirtualBox/). Other notes: While I used the above command to add the third rule, I've also verified it showed up correctly in GUI and then removed/re-added from there just to make sure). This forum link -- while very dated -- looks somewhat related in that a port forward to a static IP was not appearing (perhaps they think due to lack of gratuitous arp being sent for host to know IP is there/avail?). Anyway, what gives? Is this still buggy? Any suggestions? If not, easy enough workarounds? What's interesting is that this works perfectly fine on another user's Mac, however he's running a slightly older version (4.3.6 v. 4.3.12).

    Read the article

  • Improved appointment rendering in RadScheduler for ASP.NET AJAX, Q1 2010

    Now that the Q1 2010 release is out in the wild, we can sit down and discuss some of the changes we decided to make in the new release. One of them is the new appointment rendering of RadScheduler - a potentially breaking change, but a much needed one. If you have problems with your old custom skins, include the old base stylesheet along with your RadScheduler and set EnableEmbeddedBaseStylesheet=false in your RadScheduler. You can find the said base stylesheet attached to this post.

    While trying to improve the performance of RadScheduler, I noticed that the number of resources slows down the rendering and overall performance considerably. This had to be expected - the images to support the appointment rounded corners (and the predefined resources) were quite large. However, I didn't take into account that, for performance reasons, all browsers keep their images uncompressed in memory and at the color depth of the current desktop. A simple calculation later I discovered that the appointment sprite itself takes 25MB of memory when loaded. Add 5 resources to the fray and you have 150MB of memory down with a single blow. As it turns out, a sprite image is not a panacea - if it gets too big, don't be afraid to break it in two. The loading time may suffer, but your browser suffers more while rendering a 25MB monster.

    First I thought of undertaking the aforementioned solution - breaking the appointment sprite in two and thus reducing the two appointment sprites to a mere 2MB uncompressed. Then I thought - the rounded corners are small - I can use borders and backgrounds to simulate rounded appointment borders while still keeping the same HTML structure. The gradients can be done with a single 10x50px image, plus we have a gain - border colors and backgrounds can be changed on the fly. I started with five rendering elements at first, then tried with four and finally I settled on only three elements. Behold the new appointment rendering (quite simple really):

    On the left you can see that the first container has only top and bottom borders and a background. In fact, the background isn't even needed since it will be obscured by the elements on top of it. The whole first container is only needed for the four dots that reside in the four corners of the appointment. On top of this container is another one that holds the left and right borders and a slightly lighter background to create the illusion of a second lighter border beside the other two. Finally, on top of all the others is placed the text container, which also holds the top and bottom borders and the gradient background. On the right you can see the final result - I'm quite happy with it and I hope you will be too.

    After creating the new rendering we took another step further - we decided to use alpha gradients for the resource rendering, thus supporting appointments of any color with rounded corners and gradients. You can see some examples below. We plan on adding BorderColor and BackColor properties to the ResourceStyles definitions for Q1 SP1. However, with the new rendering in Q1 2010 we do support the BackColor and BorderColor appointment properties - you only need to set AppointmentStyleMode=Default to keep RadScheduler from switching to Simple appointment rendering. Here is one screenshot of RadScheduler with appointments set to different colors:

    I hope that you will enjoy working with the new appointments in RadScheduler. RadScheduler base stylesheet
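    The 25MB figure above follows from how browsers hold decoded images: roughly width x height x 4 bytes at a 32-bit desktop color depth. Here is a quick back-of-the-envelope sketch of that calculation; the sprite dimensions are only illustrative guesses chosen to land near the numbers in the post, not the real sprite size.

    using System;

    class SpriteMemoryEstimate
    {
        // Decoded image cost: every pixel is held uncompressed at the desktop
        // color depth, i.e. 4 bytes per pixel at 32 bpp.
        static long UncompressedBytes(int widthPx, int heightPx, int bytesPerPixel = 4)
        {
            return (long)widthPx * heightPx * bytesPerPixel;
        }

        static void Main()
        {
            double mb = 1024.0 * 1024.0;
            long sprite = UncompressedBytes(2600, 2600);                    // hypothetical dimensions
            Console.WriteLine("One sprite:  {0:N1} MB", sprite / mb);       // ~25.8 MB
            Console.WriteLine("Six sprites: {0:N1} MB", 6 * sprite / mb);   // ~155 MB, cf. the 150MB above
        }
    }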

    Read the article

  • Advanced Data Source Engine coming to Telerik Reporting Q1 2010

    This is the final blog post from the pre-release series. In it we are going to share with you some of the updates coming to our reporting solution in Q1 2010. A new Declarative Data Source Engine will be added to Telerik Reporting that will allow full control over data management and deliver significant gains in rendering performance and memory consumption. Some of the engine's new features will be:

    - Data source parameters - these parameters will be used to limit the data retrieved from the data source to just the data needed for the report. Data source parameters are processed on the data source side; only the queried data is fetched to the reporting engine, rather than the full data source. This leads to lower memory consumption, because data operations are performed on the queried data only, rather than on all data. As a result, only the queried data needs to be stored in memory instead of the whole dataset, which was the case with the old approach.
    - Support for stored procedures - they will assist in achieving a consistent implementation of logic across applications, and are especially practical for performing repetitive tasks. A stored procedure stores the SQL statements and logic, which can then be executed in different reports and/or applications. Stored procedures will not only save development time, but they will also improve performance, because each stored procedure is compiled on the database server once and then reused. In Telerik Reporting, stored procedures will also be parameterized, with elements of the SQL statement bound to parameters. These parameterized SQL queries will be handled through the data source parameters and are evaluated at run time (a generic sketch of this idea follows after this list). Using parameterized SQL queries will improve performance and decrease the memory footprint of your application, because they will be applied directly on the database server and only the necessary data will be downloaded to the middle tier or client machine.
    - Calculated fields through expressions - with the help of the new reporting engine you will be able to use field values in formulas to come up with a calculated field. A calculated field is a user-defined field that is computed "on the fly" and does not exist in the data source, but can perform calculations using the data of the data source object it belongs to. Calculated fields are very handy for adding frequently used formulas to your reports.
    - Improved performance and an optimized in-memory OLAP engine - the new data source will come with several improvements in how aggregates are calculated and memory is managed. As a result, you may see between a 30% (for simpler reports) and 400% (for calculation-intensive reports) improvement in rendering performance, and about a 50% decrease in memory consumption.
    - Full design-time support through wizards - declarative data sources are a great advance and will save developers countless hours of coding. In Q1 2010, and true to Telerik Reporting's essence, using the new data source engine and its features requires little to no coding, because we have extended most of the wizards to support the new functionality. The newly extended wizards are available in VS2005/VS2008/VS2010 design-time.

    More features will be revealed on the product's what's new page when the new version is officially released in a few days. Also make sure you attend the free webinar on Thursday, March 11th that will be dedicated to the updates in Telerik Reporting Q1 2010.
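    The "parameterized SQL queries" mentioned above are the familiar pattern of binding filter values as parameters so the database returns only the rows the report actually needs. The sketch below is a plain ADO.NET (System.Data.SqlClient) illustration of that idea, not the Telerik Reporting data source API; the connection string, table and column names are made up for the example.

    using System;
    using System.Data.SqlClient;

    class ParameterizedQuerySample
    {
        static void Main()
        {
            // Hypothetical connection string and schema, purely illustrative.
            const string connectionString = "Server=.;Database=Sales;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT OrderId, Total FROM Orders WHERE Region = @region AND OrderDate >= @from",
                connection))
            {
                // The filter values travel as parameters, so only the needed
                // rows leave the database server.
                command.Parameters.AddWithValue("@region", "EMEA");
                command.Parameters.AddWithValue("@from", new DateTime(2010, 1, 1));

                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("{0}: {1:C}", reader.GetInt32(0), reader.GetDecimal(1));
                }
            }
        }
    }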

    Read the article

  • GPGPU

    What

    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on the GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

    Why

    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal"). The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria then you probably want to check out this technology.

    How

    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs. If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology, accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor, and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

    On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

    Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.

    Read the article

  • What is the most efficient way to convert to binary and back in C#?

    - by Saad Imran.
    I'm trying to write a general purpose socket server for a game I'm working on. I know I could very well use already-built servers like SmartFox and Photon, but I want to go through the pain of creating one myself for learning purposes. I've come up with a BSON-inspired protocol to convert the basic data types, their arrays, and a special GSObject to binary and arrange them in a way so that they can be put back together into object form on the client end. At the core, the conversion methods use the .NET BitConverter class to convert the basic data types to binary. Anyway, the problem is performance: if I loop 50,000 times and convert my GSObject to binary each time, it takes about 5500ms (the resulting byte[] is just 192 bytes per conversion). I think this would be way too slow for an MMO that sends 5-10 position updates per second with 1000 concurrent users. Yes, I know it's unlikely that a game will have 1000 users on at the same time, but like I said earlier this is supposed to be a learning process for me; I want to go out of my way and build something that scales well and can handle at least a few thousand users. So yeah, if anyone's aware of other conversion techniques or sees where I'm losing performance, I would appreciate the help.

    GSBitConverter.cs
    This is the main conversion class. It adds extension methods to the main data types to convert them to the binary format. It uses the BitConverter class to convert the base types. I've shown only the code to convert integers and integer arrays, but the rest of the methods are pretty much replicas of those two; they just overload the type.

    public static class GSBitConverter
    {
        public static byte[] ToGSBinary(this short value)
        {
            return BitConverter.GetBytes(value);
        }

        public static byte[] ToGSBinary(this IEnumerable<short> value)
        {
            List<byte> bytes = new List<byte>();
            short length = (short)value.Count();
            bytes.AddRange(length.ToGSBinary());
            for (int i = 0; i < length; i++)
                bytes.AddRange(value.ElementAt(i).ToGSBinary());
            return bytes.ToArray();
        }

        public static byte[] ToGSBinary(this bool value);
        public static byte[] ToGSBinary(this IEnumerable<bool> value);
        public static byte[] ToGSBinary(this IEnumerable<byte> value);
        public static byte[] ToGSBinary(this int value);
        public static byte[] ToGSBinary(this IEnumerable<int> value);
        public static byte[] ToGSBinary(this long value);
        public static byte[] ToGSBinary(this IEnumerable<long> value);
        public static byte[] ToGSBinary(this float value);
        public static byte[] ToGSBinary(this IEnumerable<float> value);
        public static byte[] ToGSBinary(this double value);
        public static byte[] ToGSBinary(this IEnumerable<double> value);
        public static byte[] ToGSBinary(this string value);
        public static byte[] ToGSBinary(this IEnumerable<string> value);
        public static string GetHexDump(this IEnumerable<byte> value);
    }

    Program.cs
    Here's the object that I'm converting to binary in a loop.
    class Program
    {
        static void Main(string[] args)
        {
            GSObject obj = new GSObject();
            obj.AttachShort("smallInt", 15);
            obj.AttachInt("medInt", 120700);
            obj.AttachLong("bigInt", 10900800700);
            obj.AttachDouble("doubleVal", Math.PI);
            obj.AttachStringArray("muppetNames", new string[] { "Kermit", "Fozzy", "Piggy", "Animal", "Gonzo" });

            GSObject apple = new GSObject();
            apple.AttachString("name", "Apple");
            apple.AttachString("color", "red");
            apple.AttachBool("inStock", true);
            apple.AttachFloat("price", (float)1.5);

            GSObject lemon = new GSObject();
            lemon.AttachString("name", "Lemon");
            lemon.AttachString("color", "yellow");
            lemon.AttachBool("inStock", false);
            lemon.AttachFloat("price", (float)0.8);

            GSObject apricoat = new GSObject();
            apricoat.AttachString("name", "Apricoat");
            apricoat.AttachString("color", "orange");
            apricoat.AttachBool("inStock", true);
            apricoat.AttachFloat("price", (float)1.9);

            GSObject kiwi = new GSObject();
            kiwi.AttachString("name", "Kiwi");
            kiwi.AttachString("color", "green");
            kiwi.AttachBool("inStock", true);
            kiwi.AttachFloat("price", (float)2.3);

            GSArray fruits = new GSArray();
            fruits.AddGSObject(apple);
            fruits.AddGSObject(lemon);
            fruits.AddGSObject(apricoat);
            fruits.AddGSObject(kiwi);
            obj.AttachGSArray("fruits", fruits);

            Stopwatch w1 = Stopwatch.StartNew();
            for (int i = 0; i < 50000; i++)
            {
                byte[] b = obj.ToGSBinary();
            }
            w1.Stop();
            Console.WriteLine(BitConverter.IsLittleEndian ? "Little Endian" : "Big Endian");
            Console.WriteLine(w1.ElapsedMilliseconds + "ms");
        }
    }

    Here's the code for some of my other classes that are used in the code above. Most of it is repetitive. GSObject GSArray GSWrappedObject
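    One place the loop above loses time is in ToGSBinary(this IEnumerable&lt;short&gt;): unless the underlying sequence happens to be an IList&lt;T&gt;, value.ElementAt(i) re-enumerates from the start on every call, and each field allocates a temporary byte[] that is then copied into a List&lt;byte&gt;. The sketch below is a hedged alternative, not the original GSBitConverter API (names and signatures are made up): write all fields into one reusable stream with BinaryWriter and index collections directly.

    using System;
    using System.Collections.Generic;
    using System.IO;

    // Illustrative sketch: serialize every field into a single stream instead of
    // building many small byte[] arrays and concatenating them through List<byte>.
    public static class FastGSWriter
    {
        public static void Write(BinaryWriter writer, short value)
        {
            writer.Write(value);
        }

        public static void Write(BinaryWriter writer, IList<short> values)
        {
            // Indexing an IList<T> is O(1); ElementAt on a plain IEnumerable<T>
            // inside a loop can turn the whole conversion into O(n^2).
            writer.Write((short)values.Count);
            for (int i = 0; i < values.Count; i++)
                writer.Write(values[i]);
        }
    }

    class Demo
    {
        static void Main()
        {
            using (var stream = new MemoryStream())
            using (var writer = new BinaryWriter(stream))
            {
                FastGSWriter.Write(writer, (short)15);
                FastGSWriter.Write(writer, new short[] { 1, 2, 3 });
                byte[] payload = stream.ToArray();              // one allocation for the finished buffer
                Console.WriteLine(payload.Length + " bytes");   // 10 bytes
            }
        }
    }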

    Read the article

  • CRMIT’s HIGH VALUE CRM++ PLUGINS FOR CRM On DEMAND

    - by Soumo Das
    Customer satisfaction and experience being the two most considerable factors, these days businesses are on the lookout for automation tools that are world class, agile and keep quality at its core. CRMIT has developed such tools using cutting edge technologies and abstracting industry best practices and R&D.  Self Service Portal  With customers being so meticulous about regular updates and reliable access to their data, administrators just cannot think of walking a thin line. Surviving without a resource that provides a track of customer requirements for services available 24 x 7 can severely affect the productivity. In such a scenario, CRMIT’s Self Service Portal (SSP) is the best solution. This not only tracks the required customer data, but also allows companies to stay in tune with their employees, vendors and stakeholders.   One can directly sign up to become a CRMOD contact and SSP user. One need not use the database, as operations and interactions are d at run time. This is a fully configurable solution that tracks results periodically, thus making it easy for end users. It also offers better security and data visibility that enables users to progress smoothly. Quote and Order Management   When dealing with quotes, contracts and orders becomes complicated, only Quote & Order Management can work as a one-stop solution. CRMIT offers this great tool for managing all this information and for taking care of customer orders and service requirements.  This CRM On Demand plug-in allows one to create a new quote or copy the existing one. Products can be directly added from the product list of CRMOD and the pricing is calculated automatically. Quote can be generated and mailed to the external users in PDF, HTML and XLS formats. This not only allows management of quotes in an enhanced manner, but also supports various billing and tax calculation features that make work effortless.    Report Scheduler  When it comes to analyzing and providing statistics of various business processes currently running in an organization, one cannot depend on manual updates, which sometimes may be inaccurate or even delayed. CRMIT provides a SaaS based powerful solution - Report Scheduler - that allows CRM users to schedule reports as per the frequencies and then receive them as email attachments at the scheduled time.   With this powerful tool, administrators can control the report scheduler for assigning specific reports to specific users. After that, users can login and schedule any assigned report for viewing at particular intervals on monthly, weekly or daily basis. Additionally, users can also copy the mail to external users and can choose the preferred format. The best part is that sharing business data with third party become easy with this and for viewing reports, users need not log into their CRMOD account.  CRM On Demand Offline Solution CRM On-Demand Offline is another great CRM++ extension that allows one to work in both online and offline modes. Synchronizing both the modes is absolutely easy and offers ease while working. CRM OD offline works as an automation tool that not only improves efficiency, but also works as a backup in most cases. It is readily available as a windows application installer and requires users to be online only while validating and synchronizing. The best part is that working in the offline mode also works as a backup. 

    Read the article

  • top Tweets SOA Partner Community – May 2012

    - by JuergenKress
    Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity SOA Community BPMN2.0 Oracle notations poster from eaiesb http://wp.me/p10C8u-pu Torsten WinterbergLook out for new Oracle #BPM edition coming up soon: The Oracle BPM Standard edtion! Great news for easy entry, small licence fees. Yes! Danilo Schmiedel Had a great chat with customer yesterday about #OracleBPM. Next step will be a 5day event combining modeling and implementation @soacommunity Frank Nimphius Still reading "Oracle Business Process Management Suite 11g Handbook". Excellent resource for a non-SOA but ADF guy like me ;-) Oracle New webcast: Maximize #Oracle #WebLogic Server ROI with Oracle #Enterprise #Manager 12c on May 2 at 10 am PT. Register http://bit.ly/JFUrR9 OTNArchBeat@OTNArchBeat BPM in Financial Services Industry | Sanjeev Sharma http://bit.ly/HCCxui JDeveloper & ADF BPEL 11.1.1.6 Certified for Prebuilt E-Business Suite 12.1.3 SOA Integrations http://dlvr.it/1V9SxR Oracle UPK & Tutor Collaborate Attendees: Visit the UPK demo pod, SIGS, and sessions: If you are attending Collaborate 2012 - Sun. http://bit.ly/J39z65 Heidi Buelow see #fmw track RT @demed: Are you going to #KSCOPE12 in San Antonio, June 24-28? http://kscope12.com/component/ seminar/seminarslist?topicsid=6 Use promo code Fusion for discount! Sabine Leitner #SIG #Middleware 15.05. Frankfurt #Oracle #DOAG Planung & Aufbau WebLogic Server #WLS http://bit.ly/HKsCWV @OracleWebLogic @soacommunity SOA Community MDS explorer by Red Samurai http://wp.me/p10C8u-pp Biemond &reg; Retrieve or set a HTTP header from Oracle BPEL: With Oracle SOA Suite 11g patch 12928372 you can finally retrie http://bit.ly/JejTHC Lucas Jellema Call for papers for UKOUG 2012 has opened: http://techandebs.ukoug.org /default.asp?p=9306 (deadline 1st of June) OTNArchBeat BPM API usage: List all BPM Processes for a user | Kavitha Srinivasan http://bit.ly/IJKVfj demed SOA, Cloud + Service Tech symposium (London, Sep 24-25) call for paper is open http://www.servicetechsymposium. com /call2012.php @techsymp #oraclesoa OracleBlogs Lessons learned configuring OER 11g Workflows http://ow.ly/1iMsKh OTNArchBeat Scripting WebLogic Admin Server Startup | Antony Reynolds http://bit.ly/IH5ciU orclateamsoa A-Team Blog #ateam: BPM API usage: List all BPM Processes for a user http://ow.ly/1iJADp Lucas Jellema Just blogged about our Live FMW Application Development show during OBUG 2012, next Tuesday 24th April in Maastricht: OracleBlogs OEG integration with OSB/OWSM - 11g http://ow.ly/1iKx7G SOA Community SOA Community Newsletter April 2012 http://wp.me/p10C8u-pl Frank DorstRT @whitehorsesnl: Whiteblog: BPM Process Spaces in Oracle Webcenter (Patch Set 5(http://bit.ly/Hxzh29) #soacommunity #bpm #oracle) David Shaffer The Advanced SOA suite training class next week in Redwood City is full! Learned a lot about accepting credit card payments. OTNArchBeat Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5 | Shub Lahiri http://bit.ly/IgI8GN SOA Community Oracle Fusion Middleware Innovation Awards 2012, Call for Nominations #ofmaward #soa #bpm #soacommunity OTNArchBeat Updated SOA Documents now available in ITSO Reference Library http://bit.ly/I3Y6Sg Oracle Middleware Data Integrator & SOA - why 2 products better than one for integration? 
Webcast: Apr 24 10 AM PT http://bit.ly/IzmtKR Andrejus Baranovskis Red Samurai MDS Cleaner V2.0 http://fb.me/FxLVz82w SOA Community “@rluttikhuizen: Chapter 4 of SOA Made Simple book "Classification of Services" ready for collegial review” can #soacommunity get a preview? Xavier Verhaeghe #Gartner figures are out: #Oracle top in App Server market share (43.1%) and Relational #Database, too (48.8%) in 2011 Sabine Leitner WLS12c, Exa*, IDM, EM12c, DB @ Private, Public, Hybrid #Cloud Event 26.04. FFM #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity Michel Schildmeijer@wlscommunity @MiddlewareMagic @OTNArchBeat @Oracle_Fusion Oracle WebLogic / SOA Suite 11g HACMP Cluster take-over http://lnkd.in/G78qMd Oracle Middleware Hear how ODI and SOA's unified approach are key to untangling your business. April 24 10AM PT http://bit.ly/IdcsUz #Oracle OTNArchBeat Using SAP Adapter with OSB 11g (PS3) | Shub Lahiri http://bit.ly/IswR9K SOA Community Integrating with Oracle Fusion Applications: Discovering Integration Artifacts https://blogs.oracle.com/governance /entry/integrating_with_oracle_fusion_ applications #soacommunity #oer #governance OracleBlogs Tuning B2B Server Engine Threads in SOA Suite 11g http://ow.ly/1iH5bx OracleBlogs Top Tweets SOA Partner Community April 2012 http://ow.ly/1iVHfA SOA Community Oracle SOA Suite 11g Database Growth Management http://wp.me/p10C8u-pi Sabine Leitner WLS12c,Exa*,IDM,EM12c, DB @ Private, Public, Hybrid #Cloud Event 24.04. München #Oracle http://bit.ly/zcRuxi @OracleCloudZone @soacommunity SOA Community Testing Business Rules by Mark Nelson http://redstack.wordpress.com/2012/ 04/18/testing-business-rules/ #soacommunity #soa #rules #oracle SOA CommunityTop Tweets SOA Partner Community - April 2012 http://wp.me/p10C8u-pn OTNArchBeat Webcast: Untangle Your Business with Oracle Unified SOA and Data Integration - April 24 http://bit.ly/IQexqT OTNArchBeat"Do more with SOA Integration: Best of Packt" contributors include @gschmutz, @llaszews, many others http://amzn.to/HVWwYt ServiceTechSymposium Symposium agenda page coming together - page launched today with keynotes, sessions to be added shortly. http://www.servicetechsymposium.com /agenda2012.php SOA Community Shipping Specialization plaques - congratulation #Fujitsu - request yours https://soacommunity.wordpress. com/2011/02/23/who-are-the-soa-experts-specialization-recognized-by-customers/ #soacommunity #OPN http://pic.twitter.com/YMRm2ion ServiceTechSymposium call for Presentations Submission Deadline Moved Up to May 21, 2012. Send your presentations submissions ASAP! ServiceTechSymposium Symposium Keynote by Vicente Navarro, European Space Agency, added to agenda: "SOA & Service-Orientation at the European Space Agency" SOA Community Running a large #soa project? Make sure you read - Oracle SOA Suite 11g Database Growth Management #soacommunity #opn SOA Community List all BPM Processes for a user by Yogesh l #bpm #oracle #soacommunity  For regular information on Oracle SOA Suite become a member in the SOA Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) Blog Twitter LinkedIn Mix Forum Technorati Tags: soacommunity, twitter,Oracle,SOA Community,Jürgen Kress,OPN,SOA,BPM

    Read the article

  • F# in ASP.NET, mathematics and testing

    - by DigiMortal
    Starting from Visual Studio 2010 F# is full member of .NET Framework languages family. It is functional language with syntax specific to functional languages but I think it is time for us also notice and study functional languages. In this posting I will show you some examples about cool things other people have done using F#. F# and ASP.NET As I am ASP/ASP.NET MVP I am – of course – interested in how people use different languages and technologies with ASP.NET. C# MVP Tomáš Petrícek writes about developing ASP.NET MVC applications using F#. He also shows how to use LINQ To SQL in F# (using F# PowerPack) and provides sample solution and Visual Studio 2010 template for F# MVC web applications. You may also find interesting how you can create controllers in F#. Excellent work, Tomáš! Vladimir Matveev has interesting example about how to use F# and ApplicationHost class to process ASP.NET requests ouside of IIS. This is simple and very straight-forward example and I strongly suggest you to take a look at it. Very cool example is project Strom in Codeplex. Storm is web services testing tool that is fully written on F#. Take a look at this site because Codeplex offers also source code besides binaries. Math Functional languages are strong in fields like mathematics and physics. When I wrote my C# example about BigInteger class I found out that recursive version of Fibonacci algorithm in C# is not performing well. In same time I made same experiment on F# and in F# there were no performance problems with recursive version. You can find F# version of Fibonacci algorithm from Bob Palmer’s blog posting Fibonacci numbers in F#. Although golden spiral is useful for solving many problems I looked for some practical code example and found one. Kean Walmsley published in his Through the Interface blog very interesting posting Creating Fibonacci spirals in AutoCAD using F#. There are also other cool examples you may be interested in. Using numerical components by Extreme Optimization  it is possible to make some numerical integration (quadrature method) using F# (also C# example is available). fsharp.it introduces factorials calculation on F#. Robert Pickering has made very good work on programming The Game of Life in Silverlight and F# – I definitely suggest you to try out this example as it is very illustrative too. Who wants something more complex may take a look at Newton basin fractal example in F# by Jonathan Birge. Testing After some searching and surfing I found out that there is almost everything available for F# to write tests and test your F# code. FsCheck - FsCheck is a port of Haskell's QuickCheck. Important parts of the manual for using FsCheck is almost literally "adapted" from the QuickCheck manual and paper. Any errors and omissions are entirely my responsibility. FsTest - This project is designed to Language Oriented Programming constructs around unit testing and behavior testing in F#. The goal of this project is to create a Domain Specific Language for testing F# code in a way that makes sense for functional programming. FsUnit - FsUnit makes unit-testing with F# more enjoyable. It adds a special syntax to your favorite .NET testing framework. xUnit.NET - xUnit.net is a developer testing framework, built to support Test Driven Development, with a design goal of extreme simplicity and alignment with framework features. 
It is compatible with .NET Framework 2.0 and later, and offers several runners: console, GUI, MSBuild, and Visual Studio integration via TestDriven.net, CodeRush Test Runner and Resharper. It also offers test project integration for ASP.NET MVC. Getting started Well, as a first thing you need Visual Studio 2010. Then take a look at these resources: F# samples @ MSDN Microsoft F# Developer Center @ MSDN F# Language Reference @ MSDN F# blog F# forums Real World Functional Programming: With Examples in F# and C# (Amazon) Happy F#-ing! :)
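    On the Fibonacci remark above: the slowness of a naive recursive version usually comes from the doubly-recursive call tree recomputing the same values exponentially many times. As a hedged illustration only (this is not the author's original BigInteger example), here is a C# sketch of both the naive shape and a memoized variant that stays fast.

    using System;
    using System.Collections.Generic;
    using System.Numerics;   // BigInteger (reference System.Numerics.dll)

    class FibonacciDemo
    {
        // Naive doubly-recursive version: the call tree grows exponentially with n.
        static BigInteger FibNaive(int n)
        {
            if (n < 2) return n;
            return FibNaive(n - 1) + FibNaive(n - 2);
        }

        // Memoized version: each value is computed once, so it runs in linear time.
        static readonly Dictionary<int, BigInteger> cache = new Dictionary<int, BigInteger>();

        static BigInteger FibMemo(int n)
        {
            if (n < 2) return n;
            BigInteger result;
            if (cache.TryGetValue(n, out result)) return result;
            result = FibMemo(n - 1) + FibMemo(n - 2);
            cache[n] = result;
            return result;
        }

        static void Main()
        {
            Console.WriteLine(FibMemo(100));   // 354224848179261915075
        }
    }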

    Read the article

  • Algorithm to Find the Aggregate Mass of "Granola Bar"-Like Structures?

    - by Stuart Robbins
    I'm a planetary science researcher and one project I'm working on is N-body simulations of Saturn's rings. The goal of this particular study is to watch as particles clump together under their own self-gravity and measure the aggregate mass of the clumps versus the mean velocity of all particles in the cell. We're trying to figure out if this can explain some observations made by the Cassini spacecraft during the Saturnian summer solstice when large structures were seen casting shadows on the nearly edge-on rings. Below is a screenshot of what any given timestep looks like. (Each particle is 2 m in diameter and the simulation cell itself is around 700 m across.) The code I'm using already spits out the mean velocity at every timestep. What I need to do is figure out a way to determine the mass of particles in the clumps and NOT the stray particles between them. I know every particle's position, mass, size, etc., but I don't know easily that, say, particles 30,000-40,000 along with 102,000-105,000 make up one strand that to the human eye is obvious. So, the algorithm I need to write would need to be a code with as few user-entered parameters as possible (for replicability and objectivity) that would go through all the particle positions, figure out what particles belong to clumps, and then calculate the mass. It would be great if it could do it for "each" clump/strand as opposed to everything over the cell, but I don't think I actually need it to separate them out. The only thing I was thinking of was doing some sort of N2 distance calculation where I'd calculate the distance between every particle and if, say, the closest 100 particles were within a certain distance, then that particle would be considered part of a cluster. But that seems pretty sloppy and I was hoping that you CS folks and programmers might know of a more elegant solution? Edited with My Solution: What I did was to take a sort of nearest-neighbor / cluster approach and do the quick-n-dirty N2 implementation first. So, take every particle, calculate distance to all other particles, and the threshold for in a cluster or not was whether there were N particles within d distance (two parameters that have to be set a priori, unfortunately, but as was said by some responses/comments, I wasn't going to get away with not having some of those). I then sped it up by not sorting distances but simply doing an order N search and increment a counter for the particles within d, and that sped stuff up by a factor of 6. Then I added a "stupid programmer's tree" (because I know next to nothing about tree codes). I divide up the simulation cell into a set number of grids (best results when grid size ˜7 d) where the main grid lines up with the cell, one grid is offset by half in x and y, and the other two are offset by 1/4 in ±x and ±y. The code then divides particles into the grids, then each particle N only has to have distances calculated to the other particles in that cell. Theoretically, if this were a real tree, I should get order N*log(N) as opposed to N2 speeds. I got somewhere between the two, where for a 50,000-particle sub-set I got a 17x increase in speed, and for a 150,000-particle cell, I got a 38x increase in speed. 12 seconds for the first, 53 seconds for the second, 460 seconds for a 500,000-particle cell. Those are comparable speeds to how long the code takes to run the simulation 1 timestep forward, so that's reasonable at this point. Oh -- and it's fully threaded, so it'll take as many processors as I can throw at it.
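    As a rough illustration of the grid-bucket approach described in the edit above (a sketch only, not the author's simulation code; the parameters d and minNeighbors stand in for the distance and particle-count thresholds mentioned in the post), a 2D version can look like this:

    using System;
    using System.Collections.Generic;

    class ClumpMass
    {
        struct Particle { public double X, Y, Mass; }

        // A particle counts as part of a clump when at least minNeighbors other
        // particles lie within distance d; binning into cells of size d means each
        // particle only checks the 3x3 block of cells around it instead of all N.
        static double AggregateClumpMass(IList<Particle> p, double d, int minNeighbors)
        {
            var cells = new Dictionary<(long, long), List<int>>();
            Func<double, long> bin = v => (long)Math.Floor(v / d);
            for (int i = 0; i < p.Count; i++)
            {
                var key = (bin(p[i].X), bin(p[i].Y));
                if (!cells.TryGetValue(key, out var members)) cells[key] = members = new List<int>();
                members.Add(i);
            }

            double clumpMass = 0;
            for (int i = 0; i < p.Count; i++)
            {
                int found = 0;
                for (long gx = bin(p[i].X) - 1; gx <= bin(p[i].X) + 1 && found < minNeighbors; gx++)
                    for (long gy = bin(p[i].Y) - 1; gy <= bin(p[i].Y) + 1 && found < minNeighbors; gy++)
                        if (cells.TryGetValue((gx, gy), out var near))
                            foreach (int j in near)
                                if (j != i)
                                {
                                    double dx = p[i].X - p[j].X, dy = p[i].Y - p[j].Y;
                                    if (dx * dx + dy * dy <= d * d && ++found >= minNeighbors) break;
                                }
                if (found >= minNeighbors) clumpMass += p[i].Mass;   // stray particles are skipped
            }
            return clumpMass;
        }

        static void Main()
        {
            // Tiny synthetic demo: 1000 unit-mass particles in a 700 x 700 cell.
            var rng = new Random(1);
            var particles = new List<Particle>();
            for (int i = 0; i < 1000; i++)
                particles.Add(new Particle { X = rng.NextDouble() * 700, Y = rng.NextDouble() * 700, Mass = 1.0 });
            Console.WriteLine(AggregateClumpMass(particles, 10.0, 5));
        }
    }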

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2d RPG (in a similar vein to final fantasy) and I was wondering if anyone had some advice on how to do damage formulas/links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my questions is too large/broad My Characters stats are composed of the following: enum Stat { HP = 0, MP = 1, SP = 2, Strength = 3, Vitality = 4, Magic = 5, Spirit = 6, Skill = 7, Speed = 8, //Speed/Agility are the same thing Agility = 8, Evasion = 9, MgEvasion = 10, Accuracy = 11, Luck = 12, }; Vitality is basically defense to physical attacks and spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for HP/SP, 999 for the rest) with typical values (at level 100) before/after abilities+equipment+etc will be 8000/20,000 for HP, 800/2000 for SP/MP, 180/350 for other stats Late game Enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million). I was wondering how do people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference here http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381 but as a quick example: Str = 127, 'Attack' command used, enemy Def = 34. 1. Physical Damage Calculation: Step 1 ------------------------------------- [{(Stat^3 ÷ 32) + 32} x DmCon ÷16] Step 2 ---------------------------------------- [{(127^3 ÷ 32) + 32} x 16 ÷ 16] Step 3 -------------------------------------- [{(2048383 ÷ 32) + 32} x 16 ÷ 16] Step 4 --------------------------------------------------- [{(64011) + 32} x 1] Step 5 -------------------------------------------------------- [{(64043 x 1)}] Step 6 ---------------------------------------------------- Base Damage = 64043 Step 7 ----------------------------------------- [{(Def - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------ [{(34 - 280.4)^2} ÷ 110] + 16 Step 9 ------------------------------------------------- [(-246)^2) ÷ 110] + 16 Step 10 ---------------------------------------------------- [60516 ÷ 110] + 16 Step 11 ------------------------------------------------------------ [550] + 16 Step 12 ---------------------------------------------------------- DefNum = 566 Step 13 ---------------------------------------------- [BaseDmg * DefNum ÷ 730] Step 14 --------------------------------------------------- [64043 * 566 ÷ 730] Step 15 ------------------------------------------------------ [36248338 ÷ 730] Step 16 ------------------------------------------------- Base Damage 2 = 49655 Step 17 ------------ Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ---------------------- 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730 Step 19 ------------------------- 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730 Step 20 ------------------------------- 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730 Step 21 ------------------------------------- 49655 * {730 - (1629) ÷ 10} ÷ 730 Step 22 --------------------------------------------- 49655 * {730 - 162} ÷ 730 Step 23 ----------------------------------------------------- 49655 * 568 ÷ 730 Step 24 -------------------------------------------------- Final Damage = 38635 I simply modified the dividers to include the attack rating of weapons and the armor rating of armor. 
Magic Damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1 Step 1 ----------------------------------- [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4] Step 2 ------------------------------------------ [70 * ([255^2 ÷ 6] + 70) ÷ 4] Step 3 ------------------------------------------ [70 * ([65025 ÷ 6] + 70) ÷ 4] Step 4 ------------------------------------------------ [70 * (10837 + 70) ÷ 4] Step 5 ----------------------------------------------------- [70 * (10907) ÷ 4] Step 6 ------------------------------------ Base Damage = 190872 [cut to 99999] Step 7 ---------------------------------------- [{(MDef - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------- [{(1 - 280.4)^2} ÷ 110] + 16 Step 9 ---------------------------------------------- [{(-279.4)^2} ÷ 110] + 16 Step 10 -------------------------------------------------- [(78064) ÷ 110] + 16 Step 11 ------------------------------------------------------------ [709] + 16 Step 12 --------------------------------------------------------- MDefNum = 725 Step 13 --------------------------------------------- [BaseDmg * MDefNum ÷ 730] Step 14 --------------------------------------------------- [99999 * 725 ÷ 730] Step 15 ------------------------------------------------- Base Damage 2 = 99314 Step 16 ---------- Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730 Step 17 ------------------------ 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ------------------------------ 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730 Step 19 --------------------------------------- 99314 * {730 - (49) ÷ 10} ÷ 730 Step 20 ----------------------------------------------------- 99314 * 725 ÷ 730 Step 21 -------------------------------------------------- Final Damage = 98633 The problem is that the formulas completely fall apart once stats start going above 255. In particular Defense values over 300 or so start generating really strange behavior. High Strength + Defense stats lead to massive negative values for instance. While I might be able to modify the formulas to work correctly for my use case, it'd probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening excel and trying to build the formula that way (mapping Attack Stats vs. Defense Stats for instance) but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
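    One concrete starting point is to transcribe the physical-damage walkthrough above almost literally into code, which makes it easy to see how the curve behaves as stats grow past the 255-ish range the original formula assumes. The sketch below is just that transcription (with 280 in place of 280.4 so everything stays in integer math, matching the FAQ's own worked numbers). One plausible source of the "massive negative values" is that the BaseDmg x DefNum product overflows a 32-bit int once stats head toward their caps, so the sketch uses 64-bit longs.

    using System;

    class FfxStyleDamage
    {
        // Straight transcription of the physical-damage steps quoted above.
        // dmCon is the command's damage constant (16 for a plain Attack).
        static long PhysicalDamage(long str, long def, long dmCon)
        {
            long baseDamage = ((str * str * str / 32) + 32) * dmCon / 16;     // steps 1-6
            long defNum = (def - 280) * (def - 280) / 110 + 16;               // steps 7-12
            long baseDamage2 = baseDamage * defNum / 730;                     // steps 13-16
            long mitigation = 730 - (def * 51 - def * def / 11) / 10;         // steps 17-22
            return baseDamage2 * mitigation / 730;                            // steps 23-24
        }

        static void Main()
        {
            Console.WriteLine(PhysicalDamage(127, 34, 16));   // 38635, matching the worked example
            Console.WriteLine(PhysicalDamage(180, 350, 16));  // note how Def past ~280 makes DefNum grow again
        }
    }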

    Read the article

  • Solving Euler Project Problem Number 1 with Microsoft Axum

    - by Jeff Ferguson
    Note: The code below applies to version 0.3 of Microsoft Axum. If you are not using this version of Axum, then your code may differ from that shown here. I have just solved Problem 1 of Project Euler using Microsoft Axum. The problem statement is as follows: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000. My Axum-based solution is as follows: namespace EulerProjectProblem1{ // http://projecteuler.net/index.php?section=problems&id=1 // // If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. // The sum of these multiples is 23. // Find the sum of all the multiples of 3 or 5 below 1000. channel SumOfMultiples { input int Multiple1; input int Multiple2; input int UpperBound; output int Sum; } agent SumOfMultiplesAgent : channel SumOfMultiples { public SumOfMultiplesAgent() { int Multiple1 = receive(PrimaryChannel::Multiple1); int Multiple2 = receive(PrimaryChannel::Multiple2); int UpperBound = receive(PrimaryChannel::UpperBound); int Sum = 0; for(int Index = 1; Index < UpperBound; Index++) { if((Index % Multiple1 == 0) || (Index % Multiple2 == 0)) Sum += Index; } PrimaryChannel::Sum <-- Sum; } } agent MainAgent : channel Microsoft.Axum.Application { public MainAgent() { var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain(); SumOfMultiples::Multiple1 <-- 3; SumOfMultiples::Multiple2 <-- 5; SumOfMultiples::UpperBound <-- 1000; var Sum = receive(SumOfMultiples::Sum); System.Console.WriteLine(Sum); System.Console.ReadLine(); PrimaryChannel::ExitCode <-- 0; } }} Let’s take a look at the various parts of the code. I begin by setting up a channel called SumOfMultiples that accepts three inputs and one output. The first two of the three inputs will represent the two possible multiples, which are three and five in this case. The third input will represent the upper bound of the problem scope, which is 1000 in this case. The lone output of the channel represents the sum of all of the matching multiples: channel SumOfMultiples{ input int Multiple1; input int Multiple2; input int UpperBound; output int Sum;} I then set up an agent that uses the channel. The agent, called SumOfMultiplesAgent, received the three inputs from the channel sent to the agent, stores the results in local variables, and performs the for loop that iterates from 1 to the received upper bound. 
The agent keeps track of the sum in a local variable and stores the sum in the output portion of the channel: agent SumOfMultiplesAgent : channel SumOfMultiples{ public SumOfMultiplesAgent() { int Multiple1 = receive(PrimaryChannel::Multiple1); int Multiple2 = receive(PrimaryChannel::Multiple2); int UpperBound = receive(PrimaryChannel::UpperBound); int Sum = 0; for(int Index = 1; Index < UpperBound; Index++) { if((Index % Multiple1 == 0) || (Index % Multiple2 == 0)) Sum += Index; } PrimaryChannel::Sum <-- Sum; }} The application’s main agent, therefore, simply creates a new SumOfMultiplesAgent in a new domain, prepares the channel with the inputs that we need, and then receives the Sum from the output portion of the channel: agent MainAgent : channel Microsoft.Axum.Application{ public MainAgent() { var SumOfMultiples = SumOfMultiplesAgent.CreateInNewDomain(); SumOfMultiples::Multiple1 <-- 3; SumOfMultiples::Multiple2 <-- 5; SumOfMultiples::UpperBound <-- 1000; var Sum = receive(SumOfMultiples::Sum); System.Console.WriteLine(Sum); System.Console.ReadLine(); PrimaryChannel::ExitCode <-- 0; }} The result of the calculation (which, by the way, is 233,168) is sent to the console using good ol’ Console.WriteLine().
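    As a quick sanity check of that 233,168 figure, and as a contrast with the agent/channel plumbing, the same calculation in plain C# with LINQ is a one-liner. This is only a side-by-side illustration, not part of the Axum sample.

    using System;
    using System.Linq;

    class EulerProblem1
    {
        static void Main()
        {
            // Sum of every natural number below 1000 that is a multiple of 3 or 5.
            int sum = Enumerable.Range(1, 999).Where(n => n % 3 == 0 || n % 5 == 0).Sum();
            Console.WriteLine(sum);   // 233168
        }
    }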

    Read the article

  • Camera Projection back Into 3D world, offset error

    - by Anthony
    I'm using XNA to simulate a robot in a 3D world and then do image analysis on what the camera sees. I have my camera looking down in front of the direction that the robot is going, and I have the robot detecting white pixels. I'm trying to take the white pixels that it finds and project them back into the 3D world so that I can see if it is actually detecting the correct pixels. I almost have it working, but there is an offset between where the white is in the World and where I put my orange triangles (which represent what the robot thinks is white). /// <summary> /// Takes a bool map and makes vertex positions based on the map. /// </summary> /// <param name="c"> The bool map</param> private void ProjectBoolMapOnGroundAnthony2(bool[,] c) { float triangleSize = 0.04f; // Point of interest in World W coordinate system. Vector3 pointOfInterest_W = Vector3.Zero; // Point of interest in Robot Coordinate system R Vector3 pointOfInterest_R = Vector3.Zero; // alpha is the angle from the robot camera to where it is looking in the center. //double alpha = Math.Atan(1.8f / 1); /// Matrix representation of the view determined by the position, target, and updirection. Matrix View = ((SimulationMain)Game).mainRobot.robotCameraView.View; /// Matrix representation of the view determined by the angle of the field of view (Pi/4), aspectRatio, nearest plane visible (1), and farthest plane visible (1200) Matrix Projection = ((SimulationMain)Game).mainRobot.robotCameraView.Projection; /// Matrix representing how the real world coordinates differ from that of the rendering by the camera. Matrix World = ((SimulationMain)Game).mainRobot.robotCameraView.World; Plane groundPlan = new Plane(Vector3.UnitZ, 0.0f); for (int x = 0; x < this.screenWidth; x++) { for (int y = 0; y < this.screenHeight; ) { if (c[x, y] == true && this.count1D < 62000) { int j = 1; Vector3 nearPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 0), Projection, View, World); Vector3 farPlanePoint = Game.GraphicsDevice.Viewport.Unproject(new Vector3(x, y, 1), Projection, View, World); //Vector3 pointOfInterest_W = Vector3.in Ray ray = new Ray(nearPlanePoint, farPlanePoint); pointOfInterest_W = ray.Position + ray.Direction * (float) ray.Intersects(groundPlan); this.vertexArray2[this.count1D + 0].Position.X = pointOfInterest_W.X - triangleSize; this.vertexArray2[this.count1D + 0].Position.Y = pointOfInterest_W.Y - triangleSize * j; this.vertexArray2[this.count1D + 0].Position.Z = pointOfInterest_W.Z; this.vertexArray2[this.count1D + 0].Color = Color.DarkOrange; // Put another vertex at the position but +1 in the X direction triangleSize //this.vertexArray2[this.count1D + 1].Position.X = pointOnGroud.X + 3; //this.vertexArray2[this.count1D + 1].Position.Y = pointOnGroud.Y + j; this.vertexArray2[this.count1D + 1].Position.X = pointOfInterest_W.X; this.vertexArray2[this.count1D + 1].Position.Y = pointOfInterest_W.Y + triangleSize * j; this.vertexArray2[this.count1D + 1].Position.Z = pointOfInterest_W.Z; this.vertexArray2[this.count1D + 1].Color = Color.Red; // Put another vertex at the position but +1 in the X direction //this.vertexArray2[this.count1D + 0].Position.X = pointOnGroud.X; //this.vertexArray2[this.count1D + 0].Position.Y = pointOnGroud.Y + 3 + j; this.vertexArray2[this.count1D + 2].Position.X = pointOfInterest_W.X + triangleSize; this.vertexArray2[this.count1D + 2].Position.Y = pointOfInterest_W.Y - triangleSize * j; this.vertexArray2[this.count1D + 2].Position.Z = pointOfInterest_W.Z; 
this.vertexArray2[this.count1D + 2].Color = Color.Orange; this.count1D += 3; y += j; } else { y++; } } } } The world is a grass texture with lines on it. The world plane's normal is (0,0,1). Any ideas on why there is an offset? Thanks for the help, Anthony G.
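    The unproject-and-intersect step itself can be checked in isolation; one common culprit for a constant offset is unprojecting pixel corners instead of pixel centres (x + 0.5, y + 0.5), or a World matrix that doesn't match the one used when rendering. A minimal sketch of the ray/ground-plane intersection (Python for illustration only; XNA's Ray.Intersects does the equivalent):

        # Intersect the ray through near_point and far_point with the ground plane z = 0
        # (normal (0, 0, 1)); vectors are plain (x, y, z) tuples.
        def intersect_ground(near_point, far_point):
            direction = tuple(f - n for n, f in zip(near_point, far_point))
            nz, dz = near_point[2], direction[2]
            if dz == 0:
                return None  # ray is parallel to the ground plane
            t = -nz / dz
            if t < 0:
                return None  # the ground is behind the ray origin
            return tuple(n + t * d for n, d in zip(near_point, direction))

        print(intersect_ground((0.0, 0.0, 5.0), (2.0, 4.0, -5.0)))  # (1.0, 2.0, 0.0)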

    Read the article

  • HSSFS Part 2.1 - Parsing @@VERSION

    - by Most Valuable Yak (Rob Volk)
    For Part 2 of the Handy SQL Server Function Series I decided to tackle parsing useful information from the @@VERSION function, because I am an idiot.  It turns out I was confused about CHARINDEX() vs. PATINDEX() and it pretty much invalidated my original solution.  All is not lost though, this mistake turned out to be informative for me, and hopefully for you. Referring back to the "Version" view in the prelude I started with the following query to extract the version number: SELECT DISTINCT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, 12) VerNum FROM VERSION I used PATINDEX() to find the first hyphen "-" character in the string, since the version number appears 2 positions after it, and got these results: SQLVersion VerNum ----------- ------------ 2000 8.00.2055 (I 2005 9.00.3080.00 2005 9.00.4053.00 2008 10.50.1600.1 As you can see it was good enough for most of the values, but not for the SQL 2000 @@VERSION.  You'll notice it has only 3 version sections/octets where the others have 4, and the SUBSTRING() grabbed the non-numeric characters after.  To properly parse the version number will require a non-fixed value for the 3rd parameter of SUBSTRING(), which is the number of characters to extract. The best value is the position of the first space to occur after the version number (VN), the trick is to figure out how to find it.  Here's where my confusion about PATINDEX() came about.  The CHARINDEX() function has a handy optional 3rd parameter: CHARINDEX (expression1 ,expression2 [ ,start_location ] ) While PATINDEX(): PATINDEX ('%pattern%',expression ) Does not.  I had expected to use PATINDEX() to start searching for a space AFTER the position of the VN, but it doesn't work that way.  Since there are plenty of spaces before the VN, I thought I'd try PATINDEX() on another character that doesn't appear before, and tried "(": SELECT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, PATINDEX('%(%',VersionString)) FROM VERSION Unfortunately this messes up the length calculation and yields: SQLVersion VerNum ----------- --------------------------- 2000 8.00.2055 (Intel X86) Dec 16 2008 19:4 2005 9.00.3080.00 (Intel X86) Sep 6 2009 01: 2005 9.00.4053.00 (Intel X86) May 26 2009 14: 2008 10.50.1600.1 (Intel X86) Apr 2008 10.50.1600.1 (X64) Apr 2 20 Yuck.  The problem is that PATINDEX() returns position, and SUBSTRING() needs length, so I have to subtract the VN starting position: SELECT SQLVersion, SUBSTRING(VersionString,PATINDEX('%-%',VersionString)+2, PATINDEX('%(%',VersionString)-PATINDEX('%-%',VersionString)) VerNum FROM VERSION And the results are: SQLVersion VerNum ----------- -------------------------------------------------------- 2000 8.00.2055 (I 2005 9.00.4053.00 (I Msg 537, Level 16, State 2, Line 1 Invalid length parameter passed to the LEFT or SUBSTRING function. Ummmm, whoops.  Turns out SQL Server 2008 R2 includes "(RTM)" before the VN, and that causes the length to turn negative. So now that that blew up, I started to think about matching digit and dot (.) patterns.  Sadly, a quick look at the first set of results will quickly scuttle that idea, since different versions have different digit patterns and lengths. At this point (which took far longer than I wanted) I decided to cut my losses and redo the query using CHARINDEX(), which I'll cover in Part 2.2.  
So to do a little post-mortem on this technique: PATINDEX() doesn't have the flexibility to match the digit pattern of the version number; PATINDEX() doesn't have a "start" parameter like CHARINDEX(), which would allow us to skip over parts of the string; and the SUBSTRING() expression is getting pretty complicated for this relatively simple task! This doesn't mean that PATINDEX() isn't useful; it's just not a good fit for this particular problem. I'll include a version in the next post that extracts the version number properly. UPDATE: Sorry if you saw the unformatted version of this earlier; I'm on a quest to find blog software that ACTUALLY WORKS.
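    As a preview of where the CHARINDEX() version is headed, the key idea is to search for the terminating space starting after the version number rather than from the beginning of the string. A rough sketch of that search-from-a-start-position logic (Python here just to illustrate the idea; the sample strings are abbreviated, made-up @@VERSION values):

        samples = [
            "Microsoft SQL Server 2000 - 8.00.2055 (Intel X86) Dec 16 2008 ...",
            "Microsoft SQL Server 2008 R2 (RTM) - 10.50.1600.1 (X64) Apr 2 2010 ...",
        ]
        for version_string in samples:
            start = version_string.index("-") + 2    # version number begins 2 characters after the hyphen
            end = version_string.index(" ", start)   # first space AFTER the start position
            print(version_string[start:end])         # 8.00.2055, then 10.50.1600.1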

    Read the article

  • 7 reasons you had to be at JavaOne Latin America 2012

    - by Bruno.Borges
    Yesterday was 12/12/12, and everybody went crazy on Twitter with cool memes like this one. And maybe you are now wondering why I mentioned 7 (seven) on the blog title. Because I want to play numbers? Yes! Today is 7 days after JavaOne Latin America 2012 is over (... and I had to figure out an excuse for taking so long to blog about it...). So unless you were at JavaOne Latin America this year, here are 7 things you missed: OTN Lounge mini-theatreThere was a mini-theatre holding several lightning talks. We had people from SouJava JUG, GoJava JUG, Globalcode, and several other Java gurus and companies running demos, talks, and even more. For example, @drspockbr talked about the ScrumToys project, that demonstrates the power of JSF. Hands On Lab for JAX-RS and WebSocketsOne of the cool things to do during JavaOne is to come to these Hands On labs and really do something using new technologies with the help of experts. This one in particular, was covered by me, Arun Gupta, and Reza Rahman. The HOL had more people than laptops (and we had 48 laptops!) interested on understanding and learning about the new stuff that is coming within Java EE 7. Things like JAX-RS, Server-sent Events and WebSockets. Hey, if you want to try this HOL by yourself, it is available on Github, so go for it! If you have questions, just let me know! Java Community KeynoteThis keynote presented a lot of cool things like startups using Java in their projects, the Duke Awards, SouJava winning the JCP Outstanding Award, the Java Band, and even more! It was really a space where the Java community could present what they are doing and what they want to do. There's a lot of interest on the Adopt-a-JSR program and the Adopt-OpenJDK. There's also an Adopt-a-JavaEE-JSR program! Take a look if you want to participate and Make the Future Java. Java EE (JMS, JAX-RS) sessions from Reza Rahman, the HeavyMetal guyReza is a well know professional and Java EE enthusiast from the communitty who just joined Oracle this year. His sessions were very well attended, perhaps because of a high interest on the new things coming to Java EE 7 like JMS 2.0 and JAX-RS 2.0. If you want to look at what he did at this JavaOne edition, read his blog post. By the way, if you like Java and heavymetal, you should follow him on Twitter as well! :-) Java EE (WebSockets, HTML5) sessions from Arun Gupta, the GlassFish guyIf you don't know Arun Gupta, no worries. You will have time to know about him while you read his Java EE 6 Pocket Guide. Arun has been evangelizing Java EE for a long time, and is now spreading his word about the new upcoming version Java EE 7. He gave one talk about HTML5 Productivity on the Java EE 7 platform, and another one on building web apps with WebSockets. Pretty neat! Arun blogged about JavaOne Latin America as well. Read it here. Java Embedded and JavaFXIf there are two things that are really trending in the Java World right now besides Java EE 7, certainly they are JavaFX and Java Embedded. There were 14 talks covering Java Embedded, from Java Cards to Raspberry.pi, from Java ME to Java on your TV with Ginga-J. The Internet of Things is becoming true, and Java is the only platform today that can connect it all in an standardized and concise way. JavaFX gained a lot of attention too. There were 8 sessions covering what the platform has to offer in terms of Rich User Experience. 
The JavaFX Scene Builder is an awesome tool to start designing a UI, and coding for JavaFX is like coding Swing with 8 hands, one holding your coffee cup. You can achieve a lot with your two hands (unless you really have 8 hands, then you can achieve 4 times more :-). If you want to read more about JavaFX, go to Stephen Chin's blog post. GlassFish and Friends Party, 1st edition at JavaOne Latin America: This is probably the thing that I'm most proud of. We brought to Brazil the tradition of holding a happy hour for all GlassFish and Java EE friends. This party started almost 7 years ago in San Francisco, and it was about time to bring it to Brazil! The party happened on Tuesday night, right after the JavaOne General Keynote, at the Tribeca Pub. We had about 80 attendees and met a lot of Java EE developers there! People from JUGs, Oracle, Locaweb and Red Hat showed up too, including some execs from Oracle who couldn't resist and would not miss a party like this one. Lots of caipirinhas, beer and food for everyone, some cool music... even The Fish walking around the party with Juggy! You can see more photos from the party in an album I shared with the recently created GlassFish Brasil community on Google+ here (but you may be more interested in joining the GlassFish English community). There are also more pictures that Arun took and shared at this link. So now you may want to consider coming to Brazil next year! Java EE 7 is on its way, and Brazil is happily and patiently waiting for it, with a lot of enthusiasm. By the way, GlassFish and Java EE 6 just celebrated a Happy Birthday!

    Read the article

  • Ongoing confusion about ivars and properties in objective C

    - by Earl Grey
    After almost 8 months of iOS programming, I am again confused about the right approach. Maybe it is not the language but some OOP principle I am confused about. I don't know. I was trying C# a few years back. There were fields (private variables, private data in an object), there were getters and setters (methods which exposed something to the world), and properties, which were THE exposed thing. I liked the elegance of the solution; for example, there could be a class that would have a property called DailyRevenue...a float...but there was no private variable called dailyRevenue, there was only a field - an array of single transaction revenues...and the getter for the DailyRevenue property calculated the revenue transparently. If somehow the internals of the daily revenue calculation were to change, it would not affect somebody who consumed my DailyRevenue property in any way, since he would be shielded from the getter implementation. I understood that sometimes there was, and sometimes there wasn't, a 1-1 relationship between fields and properties, depending on the requirements. It seemed OK in my opinion, and properties seemed to be THE way to access the data in an object. I know the difference between the private, protected, and public keywords. Now let's get to Objective-C. On what factor should I base my decision about making something only an ivar or making it a property? Is the mental model the same as I describe above? I know that ivars are "protected" by default, not "private" as in C#. But that's OK I think, no big deal for my present level of understanding of iOS development. The point is ivars are not accessible from outside (given I don't make them public, but I won't). The thing that clouds my clear understanding is that I can have IBOutlets from ivars. Why am I seeing internal object data in the UI? *Why is it ok?* On the other hand, if I make an IBOutlet from a property, and I do not make it readonly, anybody can change it. Is this ok too? Let's say I have a ParseManager object. This object would use a built-in Foundation framework class called NSXMLParser. Obviously my ParseManager will utilize this NSXMLParser's capabilities but will also do some additional work. Now my question is, who should initialize this NSXMLParser object, and in which way should I make a reference to it from the ParseManager object when there is a need to parse something? A) the ParseManager -1) in its default init method (possible here: ivar - or - ivar+ppty) -2) with lazy loading in the getter (requires a ppty here) B) Some other object - which will pass a reference to the NSXMLParser object to the ParseManager object. -1) in some custom initializer (initWithParser:(NSXMLParser *) parser) when creating the ParseManager object. A1 - the problem is, we create a parser and waste memory while it is not yet needed. However, we can be sure that all methods that are part of the ParseManager object can use the ivar safely, since it exists. A2 - the problem is, the NSXMLParser is exposed to the outside world, although it could be read-only. Would we want a parser to be exposed in some scenario? B1 - this could maybe be useful when we want to use more types of parsers... I don't know. I understand that architectural requirements and language are not the same thing. But clearly the two are related. How do I get out of this mess of mine? Please bear with me, I wasn't able to come up with a single ultimate question. 
And secondly, please don't scare me with super-advanced newspeak about crazy internals (what the compiler does) and edge cases.
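    For readers who haven't used that C# pattern, the "DailyRevenue" arrangement the poster describes - a public computed property with no one-to-one backing variable - looks roughly like this, restated as a Python sketch (the names are illustrative only):

        class Register:
            def __init__(self):
                self._transactions = []      # the only stored data (the "field")

            def add_sale(self, amount):
                self._transactions.append(amount)

            @property
            def daily_revenue(self):
                # The exposed value is computed on demand; callers never see the internals.
                return sum(self._transactions)

        r = Register()
        r.add_sale(120)
        r.add_sale(80)
        print(r.daily_revenue)  # 200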

    Read the article

  • Inappropriate Updates?

    - by Tony Davis
    A recent Simple-talk article by Kathi Kellenberger dissected the fastest SQL solution, submitted by Peter Larsson as part of Phil Factor's SQL Speed Phreak challenge, to the classic "running total" problem. In its analysis of the code, the article re-ignited a heated debate regarding the techniques that should, and should not, be deemed acceptable in your search for fast SQL code. Peter's code for running total calculation uses a variation of a somewhat contentious technique, sometimes referred to as a "quirky update": SET @Subscribers = Subscribers = @Subscribers + PeopleJoined - PeopleLeft This form of the UPDATE statement, @variable = column = expression, is documented and it allows you to set a variable to the value returned by the expression. Microsoft does not guarantee the order in which rows are updated in this technique because, in relational theory, a table doesn’t have a natural order to its rows and the UPDATE statement has no means of specifying the order. Traditionally, in cases where a specific order is required, such as for running aggregate calculations, programmers who used the technique have relied on the fact that the UPDATE statement, without the WHERE clause, is executed in the order imposed by the clustered index, or in heap order, if there isn’t one. Peter wasn’t satisfied with this, and so used the ingenious device of assuring the order of the UPDATE by the use of an "ordered CTE", based on an underlying temporary staging table (a heap). However, in either case, the ordering is still not guaranteed and, in addition, would be broken under conditions of parallelism, or partitioning. Many argue, with validity, that this reliance on a given order where none can ever be guaranteed is an abuse of basic relational principles, and so is a bad practice; perhaps even irresponsible. More importantly, Microsoft doesn't wish to support the technique and offers no guarantee that it will always work. If you put it into production and it breaks in a later version, you can't file a bug. As such, many believe that the technique should never be tolerated in a production system, under any circumstances. Is this attitude justified? After all, both forms of the technique, using a clustered index to guarantee the order or using an ordered CTE, have been tested rigorously and are proven to be robust; although not guaranteed by Microsoft, the ordering is reliable, provided none of the conditions that are known to break it are violated. In Peter's particular case, the technique is being applied to a temporary table, where the developer has full control of the data ordering, and indexing, and knows that the table will never be subject to parallelism or partitioning. It might be argued that, in such circumstances, the technique is not really "quirky" at all and to ban it from your systems would serve no real purpose other than to deprive yourself of a reliable technique that has uses that extend well beyond the running total calculations. Of course, it is doubly important that such a technique, including its unsupported status and the assumptions that underpin its success, is fully and clearly documented, preferably even when posting it online in a competition or forum post. Ultimately, however, this technique has been available to programmers throughout the time Sybase and SQL Server have existed, and so cannot be lightly cast aside, even if one sympathises with Microsoft for the awkwardness of maintaining an archaic way of doing updates. 
After all, a Table hint could easily be devised that, if specified in the WITH (<Table_Hint_Limited>) clause, could be used to request the database engine to do the update in the conventional order. Then perhaps everyone would be satisfied. Cheers, Tony.
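    For readers unfamiliar with the pattern under debate, the calculation the quirky update performs is an order-dependent running total; restated procedurally (illustrative numbers, not T-SQL), it is simply:

        # subscribers carries forward from row to row, so the answer is only
        # correct if the rows are visited in chronological order.
        rows = [("Jan", 100, 10), ("Feb", 50, 5), ("Mar", 80, 20)]  # (month, joined, left)
        subscribers = 0
        for month, joined, left in rows:
            subscribers = subscribers + joined - left
            print(month, subscribers)  # Jan 90, Feb 135, Mar 195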

    Read the article

  • Fastest pathfinding for static node matrix

    - by Sean Martin
    I'm programming a route finding routine in VB.NET for an online game I play, and I'm searching for the fastest route finding algorithm for my map type. The game takes place in space, with thousands of solar systems connected by jump gates. The game devs have provided a DB dump containing a list of every system and the systems it can jump to. The map isn't quite a node tree, since some branches can jump to other branches - more of a matrix. What I need is a fast pathfinding algorithm. I have already implemented an A* routine and a Dijkstra's, both find the best path but are too slow for my purposes - a search that considers about 5000 nodes takes over 20 seconds to compute. A similar program on a website can do the same search in less than a second. This website claims to use D*, which I have looked into. That algorithm seems more appropriate for dynamic maps rather than one that does not change - unless I misunderstand it's premise. So is there something faster I can use for a map that is not your typical tile/polygon base? GBFS? Perhaps a DFS? Or have I likely got some problem with my A* - maybe poorly chosen heuristics or movement cost? Currently my movement cost is the length of the jump (the DB dump has solar system coordinates as well), and the heuristic is a quick euclidean calculation from the node to the goal. In case anyone has some optimizations for my A*, here is the routine that consumes about 60% of my processing time, according to my profiler. The coordinateData table contains a list of every system's coordinates, and neighborNode.distance is the distance of the jump. Private Function findDistance(ByVal startSystem As Integer, ByVal endSystem As Integer) As Integer 'hCount += 1 'If hCount Mod 0 = 0 Then 'Return hCache 'End If 'Initialize variables to be filled Dim x1, x2, y1, y2, z1, z2 As Integer 'LINQ queries for solar system data Dim systemFromData = From result In jumpDataDB.coordinateDatas Where result.systemId = startSystem Select result.x, result.y, result.z Dim systemToData = From result In jumpDataDB.coordinateDatas Where result.systemId = endSystem Select result.x, result.y, result.z 'LINQ execute 'Fill variables with solar system data for from and to system For Each solarSystem In systemFromData x1 = (solarSystem.x) y1 = (solarSystem.y) z1 = (solarSystem.z) Next For Each solarSystem In systemToData x2 = (solarSystem.x) y2 = (solarSystem.y) z2 = (solarSystem.z) Next Dim x3 = Math.Abs(x1 - x2) Dim y3 = Math.Abs(y1 - y2) Dim z3 = Math.Abs(z1 - z2) 'Calculate distance and round 'Dim distance = Math.Round(Math.Sqrt(Math.Abs((x1 - x2) ^ 2) + Math.Abs((y1 - y2) ^ 2) + Math.Abs((z1 - z2) ^ 2))) Dim distance = firstConstant * Math.Min(secondConstant * (x3 + y3 + z3), Math.Max(x3, Math.Max(y3, z3))) 'Dim distance = Math.Abs(x1 - x2) + Math.Abs(z1 - z2) + Math.Abs(y1 - y2) 'hCache = distance Return distance End Function And the main loop, the other 30% 'Begin search While openList.Count() != 0 'Set current system and move node to closed currentNode = lowestF() move(currentNode.id) For Each neighborNode In neighborNodes If Not onList(neighborNode.toSystem, 0) Then If Not onList(neighborNode.toSystem, 1) Then Dim newNode As New nodeData() newNode.id = neighborNode.toSystem newNode.parent = currentNode.id newNode.g = currentNode.g + neighborNode.distance newNode.h = findDistance(newNode.id, endSystem) newNode.f = newNode.g + newNode.h newNode.security = neighborNode.security openList.Add(newNode) shortOpenList(OLindex) = newNode.id OLindex += 1 Else Dim proposedG As Integer 
= currentNode.g + neighborNode.distance If proposedG < gValue(neighborNode.toSystem) Then changeParent(neighborNode.toSystem, currentNode.id, proposedG) End If End If End If Next 'Check to see if done If currentNode.id = endSystem Then Exit While End If End While If clarification is needed on my spaghetti code, I'll try to explain.
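    Two changes are likely to matter far more than the choice of heuristic: caching every system's coordinates in an in-memory dictionary instead of issuing LINQ queries inside findDistance, and replacing the linear lowestF() scan with a binary-heap priority queue. A compact sketch of that shape of A* (Python for illustration, not the poster's VB.NET; the graph and coordinates below are made up):

        import heapq

        def a_star(graph, coords, start, goal):
            # graph: node -> list of (neighbor, jump_length); coords: node -> (x, y, z)
            def h(a, b):
                (x1, y1, z1), (x2, y2, z2) = coords[a], coords[b]
                return ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5

            open_heap = [(h(start, goal), start)]   # (f, node); smallest f is popped first
            g = {start: 0}
            parent = {start: None}
            closed = set()
            while open_heap:
                _, current = heapq.heappop(open_heap)
                if current == goal:
                    path = []
                    while current is not None:
                        path.append(current)
                        current = parent[current]
                    return path[::-1]
                if current in closed:
                    continue
                closed.add(current)
                for neighbor, cost in graph[current]:
                    tentative = g[current] + cost
                    if tentative < g.get(neighbor, float("inf")):
                        g[neighbor] = tentative
                        parent[neighbor] = current
                        heapq.heappush(open_heap, (tentative + h(neighbor, goal), neighbor))
            return None

        graph = {1: [(2, 5.0)], 2: [(1, 5.0), (3, 7.0)], 3: [(2, 7.0)]}
        coords = {1: (0, 0, 0), 2: (3, 4, 0), 3: (3, 4, 7)}
        print(a_star(graph, coords, 1, 3))  # [1, 2, 3]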

    Read the article

  • How to develop RPG Damage Formulas?

    - by user127817
    I'm developing a classical 2d RPG (in a similar vein to final fantasy) and I was wondering if anyone had some advice on how to do damage formulas/links to resources/examples? I'll explain my current setup. Hopefully I'm not overdoing it with this question, and I apologize if my questions is too large/broad My Characters stats are composed of the following: enum Stat { HP = 0, MP = 1, SP = 2, Strength = 3, Vitality = 4, Magic = 5, Spirit = 6, Skill = 7, Speed = 8, //Speed/Agility are the same thing Agility = 8, Evasion = 9, MgEvasion = 10, Accuracy = 11, Luck = 12, }; Vitality is basically defense to physical attacks and spirit is defense to magic attacks. All stats have fixed maximums (9999 for HP, 999 for MP/SP and 255 for the rest). With abilities, the maximums can be increased (99999 for HP, 9999 for HP/SP, 999 for the rest) with typical values (at level 100) before/after abilities+equipment+etc will be 8000/20,000 for HP, 800/2000 for SP/MP, 180/350 for other stats Late game Enemy HP will typically be in the lower millions (with a super boss having the maximum of ~12 million). I was wondering how do people actually develop proper damage formulas that scale correctly? For instance, based on this data, using the damage formulas for Final Fantasy X as a base looked very promising. A full reference here http://www.gamefaqs.com/ps2/197344-final-fantasy-x/faqs/31381 but as a quick example: Str = 127, 'Attack' command used, enemy Def = 34. 1. Physical Damage Calculation: Step 1 ------------------------------------- [{(Stat^3 ÷ 32) + 32} x DmCon ÷16] Step 2 ---------------------------------------- [{(127^3 ÷ 32) + 32} x 16 ÷ 16] Step 3 -------------------------------------- [{(2048383 ÷ 32) + 32} x 16 ÷ 16] Step 4 --------------------------------------------------- [{(64011) + 32} x 1] Step 5 -------------------------------------------------------- [{(64043 x 1)}] Step 6 ---------------------------------------------------- Base Damage = 64043 Step 7 ----------------------------------------- [{(Def - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------ [{(34 - 280.4)^2} ÷ 110] + 16 Step 9 ------------------------------------------------- [(-246)^2) ÷ 110] + 16 Step 10 ---------------------------------------------------- [60516 ÷ 110] + 16 Step 11 ------------------------------------------------------------ [550] + 16 Step 12 ---------------------------------------------------------- DefNum = 566 Step 13 ---------------------------------------------- [BaseDmg * DefNum ÷ 730] Step 14 --------------------------------------------------- [64043 * 566 ÷ 730] Step 15 ------------------------------------------------------ [36248338 ÷ 730] Step 16 ------------------------------------------------- Base Damage 2 = 49655 Step 17 ------------ Base Damage 2 * {730 - (Def * 51 - Def^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ---------------------- 49655 * {730 - (34 * 51 - 34^2 ÷ 11) ÷ 10} ÷ 730 Step 19 ------------------------- 49655 * {730 - (1734 - 1156 ÷ 11) ÷ 10} ÷ 730 Step 20 ------------------------------- 49655 * {730 - (1734 - 105) ÷ 10} ÷ 730 Step 21 ------------------------------------- 49655 * {730 - (1629) ÷ 10} ÷ 730 Step 22 --------------------------------------------- 49655 * {730 - 162} ÷ 730 Step 23 ----------------------------------------------------- 49655 * 568 ÷ 730 Step 24 -------------------------------------------------- Final Damage = 38635 I simply modified the dividers to include the attack rating of weapons and the armor rating of armor. 
Magic Damage is calculated as follows: Mag = 255, Ultima is used, enemy MDef = 1 Step 1 ----------------------------------- [DmCon * ([Stat^2 ÷ 6] + DmCon) ÷ 4] Step 2 ------------------------------------------ [70 * ([255^2 ÷ 6] + 70) ÷ 4] Step 3 ------------------------------------------ [70 * ([65025 ÷ 6] + 70) ÷ 4] Step 4 ------------------------------------------------ [70 * (10837 + 70) ÷ 4] Step 5 ----------------------------------------------------- [70 * (10907) ÷ 4] Step 6 ------------------------------------ Base Damage = 190872 [cut to 99999] Step 7 ---------------------------------------- [{(MDef - 280.4)^2} ÷ 110] + 16 Step 8 ------------------------------------------- [{(1 - 280.4)^2} ÷ 110] + 16 Step 9 ---------------------------------------------- [{(-279.4)^2} ÷ 110] + 16 Step 10 -------------------------------------------------- [(78064) ÷ 110] + 16 Step 11 ------------------------------------------------------------ [709] + 16 Step 12 --------------------------------------------------------- MDefNum = 725 Step 13 --------------------------------------------- [BaseDmg * MDefNum ÷ 730] Step 14 --------------------------------------------------- [99999 * 725 ÷ 730] Step 15 ------------------------------------------------- Base Damage 2 = 99314 Step 16 ---------- Base Damage 2 * {730 - (MDef * 51 - MDef^2 ÷ 11) ÷ 10} ÷ 730 Step 17 ------------------------ 99314 * {730 - (1 * 51 - 1^2 ÷ 11) ÷ 10} ÷ 730 Step 18 ------------------------------ 99314 * {730 - (51 - 1 ÷ 11) ÷ 10} ÷ 730 Step 19 --------------------------------------- 99314 * {730 - (49) ÷ 10} ÷ 730 Step 20 ----------------------------------------------------- 99314 * 725 ÷ 730 Step 21 -------------------------------------------------- Final Damage = 98633 The problem is that the formulas completely fall apart once stats start going above 255. In particular Defense values over 300 or so start generating really strange behavior. High Strength + Defense stats lead to massive negative values for instance. While I might be able to modify the formulas to work correctly for my use case, it'd probably be easier just to use a completely new formula. How do people actually develop damage formulas? I was considering opening excel and trying to build the formula that way (mapping Attack Stats vs. Defense Stats for instance) but I was wondering if there's an easier way? While I can't convey the full game mechanics of my game here, might someone be able to suggest a good starting place for building a damage formula? Thanks
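    One way to see exactly where the FFX-style formula misbehaves is to evaluate its two defense terms directly; a rough sketch follows (Python, illustrative only, not a drop-in formula, and the overflow point assumes the intermediate product is held in a signed 32-bit integer):

        def def_num(defense):
            # steps 7-12: a parabola centred near Def = 280.4
            return (defense - 280.4) ** 2 / 110 + 16

        def def_multiplier(defense):
            # steps 17-23: the final mitigation multiplier
            return (730 - (defense * 51 - defense ** 2 / 11) / 10) / 730

        for d in (34, 255, 280, 350, 999):
            print(d, round(def_num(d)), round(def_multiplier(d), 3))

        # Because def_num() is a parabola, defense above ~280 starts INCREASING damage
        # again; and with Str near 999 the intermediate product exceeds 2^31 - 1,
        # which would explain the "massive negative values" if 32-bit ints are used.
        base_damage = (999 ** 3 // 32 + 32)          # steps 1-6 with DmCon = 16
        print(base_damage * round(def_num(999)) > 2 ** 31 - 1)  # True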

    Read the article

  • How to create array with unique sprites? in cocos2d iphone

    - by prakash s
    I write the code like this. This displays only one sprite (red colour bubble) with number of times and moving down, but actually I want to display different sprites (different colour bubble) every time and moving down. I also add no of .png images in resource folder of my project. Here I used only 3.png, but I need to display all *.png images (different colour bubbles) in my project but I don't know how to get this. Please help me Thank you. Here is the code: -(void)addTarget { CCSprite *target = [CCSprite spriteWithFile:@"3.png" rect:CGRectMake(0, 0, 256, 256)]; CGSize winSize = [[CCDirector sharedDirector] winSize]; int minY = target.contentSize.height/2; int maxY = winSize.height - target.contentSize.height/2; int rangeY = maxY - minY; int actualY = (arc4random() % rangeY) + minY; // Create the target slightly off-screen along the right edge, // and along a random position along the Y axis as calculated above target.position = ccp(winSize.width + (target.contentSize.width/2), actualY); [self addChild:target]; // Determine speed of the target int minDuration = 4.0; int maxDuration = 12.0; int rangeDuration = maxDuration - minDuration; int actualDuration = (arc4random() % rangeDuration) + minDuration; // Create the actions id actionMove = [CCMoveTo actionWithDuration:actualDuration position:ccp(-target.contentSize.width/2,actualY)]; id actionMoveDone = [CCCallFuncN actionWithTarget:self selector:@selector(spriteMoveFinished:)]; [target runAction:[CCSequence actions:actionMove, actionMoveDone, nil]]; // Add to targets array target.tag = 2; [_targets addObject:target]; } -(void)gameLogic:(ccTime)dt { [self addTarget]; } -(id) init { if( (self=[super initWithColor:ccc4(255,255,255,255)] )) { // Enable touch events self.isTouchEnabled = YES; // Initialize arrays _targets = [[NSMutableArray alloc] init]; _projectiles = [[NSMutableArray alloc] init]; // Get the dimensions of the window for calculation purposes CGSize winSize = [[CCDirector sharedDirector] winSize]; [self schedule:@selector(gameLogic:) interval:1.0]; [self schedule:@selector(update:)]; } return self; } - (void)update:(ccTime)dt { NSMutableArray *projectilesToDelete = [[NSMutableArray alloc] init]; for (CCSprite *projectile in _projectiles) { CGRect projectileRect = CGRectMake(projectile.position.x - (projectile.contentSize.width/2), projectile.position.y - (projectile.contentSize.height/2), projectile.contentSize.width, projectile.contentSize.height); NSMutableArray *targetsToDelete = [[NSMutableArray alloc] init]; for (CCSprite *target in _targets) { CGRect targetRect = CGRectMake(target.position.x - (target.contentSize.width/2), target.position.y - (target.contentSize.height/2), target.contentSize.width, target.contentSize.height); if (CGRectIntersectsRect(projectileRect, targetRect)) { [targetsToDelete addObject:target]; } } for (CCSprite *target in targetsToDelete) { [_targets removeObject:target]; [self removeChild:target cleanup:YES]; _projectilesDestroyed++; if (_projectilesDestroyed > 30) { //GameOverScene *gameOverScene = [GameOverScene node]; // [gameOverScene.layer.label setString:@"You Win!"]; // [[CCDirector sharedDirector] replaceScene:gameOverScene]; } } if (targetsToDelete.count > 0) { [projectilesToDelete addObject:projectile]; } [targetsToDelete release]; } for (CCSprite *projectile in projectilesToDelete) { [_projectiles removeObject:projectile]; [self removeChild:projectile cleanup:YES]; } [projectilesToDelete release]; }
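    The missing piece is choosing the texture file per spawn instead of always passing "3.png" to spriteWithFile:. A language-agnostic sketch of that selection step (Python used only to show the idea; the file names are assumptions):

        import random

        bubble_files = ["1.png", "2.png", "3.png", "4.png"]

        def next_bubble_file():
            # Pick a different bubble texture each time a target is spawned.
            return random.choice(bubble_files)

        for _ in range(5):
            print(next_bubble_file())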

    Read the article

  • Code Contracts: validating arrays and collections

    - by DigiMortal
    Validating collections before using them is a very common task when we use built-in generic types for our collections. In this posting I will show you how to validate collections using code contracts. It is cool how much awful-looking code you can avoid by using code contracts. Failing code Let's suppose we have a method that calculates the sum of all invoices in a collection. We have a class Invoice, and one of its properties is Sum. I won't introduce any complex calculations on invoices here, because we have another problem to solve in this topic. Here is our code. public static decimal CalculateTotal(IList<Invoice> invoices) {     var sum = invoices.Sum(p => p.Sum);     return sum; } This method is very simple, but it fails when the invoices list contains at least one null. Of course, we can test whether each invoice is null, but having nulls in lists like this is not a good idea – it opens the way for all kinds of coding bugs in the system. Our goal is to react to bugs ASAP, at the nearest place they occur. There is one more way to make our method fail: it happens when invoices itself is null. I think this is also a common bug during development, and it even happens in production environments under conditions that are rarely met. Now let's protect our little calculation method with code contracts. We need two contracts: invoices cannot be null, and invoices cannot contain any nulls. Our first contract is easy, but how do we write the second one? Solution: Contract.ForAll Preconditions in code are checked using the Contract.Requires method. This method takes a boolean argument that says whether the contract holds or not. There is also a method, Contract.ForAll, that takes a collection and a predicate that must hold for that collection. The nice thing is that ForAll returns a boolean, so we have a very simple solution. public static decimal CalculateTotal(IList<Invoice> invoices) {     Contract.Requires(invoices != null);     Contract.Requires(Contract.ForAll<Invoice>(invoices, p => p != null));       var sum = invoices.Sum(p => p.Sum);     return sum; } And here are some lines of code you can use to test the contracts quickly. var invoices = new List<Invoice>(); invoices.Add(new Invoice()); invoices.Add(null); invoices.Add(new Invoice()); //CalculateTotal(null); CalculateTotal(invoices); If your code is covered with unit tests then I suggest you write tests to check that these contracts hold for every code run. Conclusion Although it seemed at first that checking all elements in a collection might end up in for-loops that do not look so nice, we were able to solve our problem nicely. The ForAll method of the Contract class offered us a simple mechanism to check collections, and it does it smoothly, the code-contracts way. P.S. I suggest you also read the devlicio.us blog posting Validating Collections with Code Contracts by Derik Whittaker.

    Read the article

  • How to attach turrets to tiles in a tile based game

    - by Joseph St. Pierre
    I am a flash developer, and I am building a Tower Defense game. The world is being built through tiles, and I have gotten that accomplished easily. I have also gotten level changes and enemy spawning down as well. However, I wish the player to be able to spawn turrets, and have those turrets be on specific tiles, based upon where the player placed it. Here is my code: stop(); colOffset = 50; rowOffset = 50; guns = []; placed = true; dead = 0; spawned = 0; level = 1; interval = 350 / level; amount = level * 20; counter = 0; numCol = 14; numRow = 10; tiles = []; k = 0; create = false; tileName = new Array("road","grass","end", "start"); board = new Array( new Array(1,1,1,1,3,1,1,1,1,1,2,1,1,1), new Array(1,1,1,0,0,1,1,1,1,1,0,1,1,1), new Array(1,1,1,0,1,1,1,1,1,1,0,0,1,1), new Array(1,1,1,0,0,0,1,1,1,1,1,0,1,1), new Array(1,1,1,0,1,0,0,0,1,1,1,0,0,1), new Array(1,1,1,0,1,1,1,0,0,1,1,1,0,1), new Array(1,1,0,0,1,1,1,1,0,1,1,0,0,1), new Array(1,1,0,1,1,1,1,1,0,1,0,0,1,1), new Array(1,1,0,0,0,0,0,0,0,1,0,1,1,1), new Array(1,1,1,1,1,1,1,1,0,0,0,1,1,1) ); buildBoard(); function buildBoard(){ for ( col = 0; col < numCol; col++){ for ( row = 0; row < numRow; row++){ _root.attachMovie("tile", "tile_" + col + "_" + row, _root.getNextHighestDepth()); theTile = eval("tile_" + col + "_" + row); theTile._x = (col * 50); theTile._y = (row * 50); theTile.row = row; theTile.col = col; tileType = board[row][col]; theTile.gotoAndStop(tileName[tileType]); tiles.push(theTile); } } } init(); function init(){ onEnterFrame = function(){ counter += 1; if ( spawned < amount && counter > 50){ min= _root.attachMovie("minion","minion",_root.getNextHighestDepth()); min._x = tile_4_0._x + 25; min._y = tile_4_0._y + 25; min.health = 100; choose = Math.round(Math.random()); if ( choose == 0 ){ min.waypointX = [ tile_4_1._x +25, tile_3_1._x + 25, tile_3_2._x + 25, tile_3_6._x + 25, tile_2_6._x + 25, tile_2_8._x + 25, tile_8_8._x + 25, tile_8_9._x + 25, tile_10_9._x + 25, tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25, tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25, tile_11_2._x + 25, tile_10_2._x + 25, tile_10_0._x + 25]; min.waypointY = [ tile_4_1._y +25, tile_3_1._y + 25, tile_3_2._y + 25, tile_3_6._y + 25, tile_2_6._y + 25, tile_2_8._y + 25, tile_8_8._y + 25, tile_8_9._y + 25, tile_10_9._y + 25, tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25, tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25, tile_11_2._y + 25, tile_10_2._y + 25, tile_10_0._y + 25]; } else if ( choose == 1 ){ min.waypointX = [ tile_4_1._x +25, tile_3_1._x + 25, tile_3_2._x + 25, tile_3_3._x + 25, tile_5_3._x + 25, tile_5_4._x + 25, tile_7_4._x + 25, tile_7_5._x + 25, tile_8_5._x + 25, tile_8_8._x + 25, tile_8_9._x + 25, tile_10_9._x + 25, tile_10_7._x + 25, tile_11_7._x + 25, tile_11_6._x + 25, tile_12_6._x + 25, tile_12_4._x + 25, tile_11_4._x + 25, tile_11_2._x + 25, tile_10_2._x + 25, tile_10_0._x + 25 ]; min.waypointY = [ tile_4_1._y +25, tile_3_1._y + 25, tile_3_2._y + 25, tile_3_3._y + 25, tile_5_3._y + 25, tile_5_4._y + 25, tile_7_4._y + 25, tile_7_5._y + 25, tile_8_5._y + 25, tile_8_8._y + 25, tile_8_9._y + 25, tile_10_9._y + 25, tile_10_7._y + 25, tile_11_7._y + 25, tile_11_6._y + 25, tile_12_6._y + 25, tile_12_4._y + 25, tile_11_4._y + 25, tile_11_2._y + 25, tile_10_2._y + 25, tile_10_0._y + 25 ]; } min.i = 0; counter = 0; spawned += 1; min.onEnterFrame = function(){ dx = this.waypointX[this.i] - this._x; dy = this.waypointY[this.i] - this._y; radians = Math.atan2(dy,dx); degrees = radians * 180 / Math.PI; 
xspeed = Math.cos(radians); yspeed = Math.sin(radians); this._x += xspeed; this._y += yspeed; if( this._x == this.waypointX[this.i] && this._y == this.waypointY[this.i]){ this.i++; } if ( this._x == tile_10_0._x + 25 && this._y == tile_10_0._y + 25){ this.removeMovieClip(); dead += 1; } } } if ( dead >= amount ){ dead = 0; level += 1; amount = level * 20; spawned = 0; } } btnM.onRelease = function(){ create = true; } } game.onEnterFrame = function(){ } It is possible for me however to complete this task, but only once. I am able to make the turret, drag it over to a tile, and have it attach itself to the tile. No problem. The issue is, I cannot do these multiple times. Please Help.
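    A common way to make placement repeatable is to snap the drop position to a tile index, check the tile type, and track per-tile occupancy, rather than attaching a single hard-coded turret instance. A minimal sketch of that snapping step (Python for illustration, not ActionScript; the "grass only" rule and the board fragment are assumptions):

        TILE = 50  # tile size used by buildBoard()
        occupied = set()

        def try_place_turret(x, y, board):
            col, row = int(x // TILE), int(y // TILE)
            if board[row][col] != 1:        # assume only "grass" tiles (1) accept turrets
                return None
            if (col, row) in occupied:
                return None                 # one turret per tile
            occupied.add((col, row))
            return (col * TILE + TILE / 2, row * TILE + TILE / 2)  # centre of the tile

        board = [[1, 1, 0], [1, 0, 1]]      # made-up fragment: 1 = grass, 0 = road
        print(try_place_turret(30, 20, board))  # (25.0, 25.0) - placed on tile (0, 0)
        print(try_place_turret(40, 10, board))  # None - tile (0, 0) is already taken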

    Read the article

  • Math with Timestamp

    - by Knut Vatsendvik
    Got this little SQL quiz from a colleague. How to add or subtract exactly 1 second from a Timestamp? Sounded simple enough at first glance, but was a bit trickier than expected. If the data type had been a Date, we knew that we could add or subtract days, minutes or seconds using + or – sysdate + 1 to add one day sysdate - (1 / 24) to subtract one hour sysdate + (1 / 86400) to add one second Would the same arithmetic work with Timestamp as with Date? Let's test it out with the following query SELECT   systimestamp , systimestamp + (1 / 86400) FROM dual; ---------- 03.05.2010 22.11.50,240887 +02:00 03.05.2010 The first result line shows us the system time down to fractions of a second. The second result line shows the result as a Date (as used for date calculation), meaning that the granularity is now reduced to a second. By using the PL/SQL dump() function, we can confirm this with the following query SELECT   dump(systimestamp) , dump(systimestamp + (1 / 86400)) FROM dual; ---------- Typ=188 Len=20: 218,7,5,4,8,53,9,0,200,46,89,20,2,0,5,0,0,0,0,0 Typ=13 Len=8: 218,7,5,4,10,53,10,0 Typ=13 is the runtime representation for Date. So how can we increase the precision to include fractions of a second? After investigating it a bit, we found out that the interval data type INTERVAL DAY TO SECOND could be used, with the result of addition or subtraction being a Timestamp. Let's try our first query again, now using the interval data type. SELECT systimestamp,    systimestamp + INTERVAL '0 00:00:01.0' DAY TO SECOND(1) FROM dual; ---------- 03.05.2010 22.58.32,723659000 +02:00 03.05.2010 22.58.33,723659000 +02:00 Yes, it worked! To finish the story, here is one example showing how to specify an interval of 2 days, 6 hours, 30 minutes, 4 seconds and 111 thousandths of a second. INTERVAL '2 6:30:4.111' DAY TO SECOND(3)

    Read the article
