Search Results

Search found 21343 results on 854 pages for 'pass by reference'.

Page 437 of 854

  • How to keep track of previous scenes and return to them in libgdx

    - by MxyL
    I have three scenes: SceneTitle, SceneMenu, SceneLoad. (The difference between the title scene and the menu scene is that the title scene is what you see when you first turn on the game, and the menu scene is what you can access during the game, that is, after you've hit "play!" in the title scene.) I provide the ability to save progress and consequently load a particular game. An issue I've run into is being able to easily keep track of the previous scene. For example, if you enter the load scene and then decide to change your mind, the game needs to go back to where you were before; this isn't something that can be hardcoded. An easy solution off the top of my head is to simply maintain a scene stack, which keeps track of history for me. A simple transaction would be as follows: I'm currently in the menu scene, so the top of the stack is SceneMenu. I go to the load scene, so the game pushes SceneLoad onto the stack. When I return from the load scene, the game pops SceneLoad off the stack and initializes the scene that is now at the top, which is SceneMenu. I'm coding in Java, so I can't simply pass around classes as if they were objects; instead I've decided to define an enum for each scene, put that on the stack, and have my scene-managing class go through a list of if conditions to return the appropriate instance. How can I implement my scene stack without having to do too much work maintaining it?
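
    A minimal sketch of what such a stack could look like (my own illustration; the enum, class and method names are assumptions, and the placeholder lambdas stand in for the poster's SceneTitle/SceneMenu/SceneLoad classes):

        import java.util.ArrayDeque;
        import java.util.Deque;

        enum SceneId { TITLE, MENU, LOAD }

        interface Scene { void show(); }

        class SceneManager {
            private final Deque<SceneId> history = new ArrayDeque<>();

            // Go to a new scene, remembering where we came from.
            void push(SceneId next) {
                history.push(next);
                activate(next);
            }

            // Leave the current scene and re-activate whatever is now on top of the stack.
            // (Assumes the stack never empties; the title scene stays at the bottom.)
            void back() {
                history.pop();
                activate(history.peek());
            }

            private void activate(SceneId id) {
                createScene(id).show(); // in libgdx this would typically hand off to Game.setScreen(...)
            }

            // The single place where the enum-to-instance mapping lives.
            private Scene createScene(SceneId id) {
                switch (id) {
                    case TITLE: return () -> System.out.println("showing title scene");
                    case MENU:  return () -> System.out.println("showing menu scene");
                    default:    return () -> System.out.println("showing load scene");
                }
            }
        }

    With something like this, the only per-scene code is the createScene factory, so maintaining the stack itself stays a two-method affair.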

    Read the article

  • Smooth waypoint traversing

    - by TheBroodian
    There are a dozen ways I could word this question, but to keep my thoughts in line, I'm phrasing it in line with my problem at hand. I'm creating a floating platform that should simply travel from one designated point to another, then return to the first, passing between the two in a straight line. To make it a little more interesting, I want to add a few rules to the platform. It travels in multiples of whole tile values of the world data, so if the platform is not stationary, it travels at least one whole tile width or tile height. Within one tile length, it should accelerate from a stop to a given max speed. Upon reaching one tile length's distance from its destination, it should slow to a stop at the given tile coordinate and then repeat the process in reverse. The first two parts aren't too difficult; essentially I'm having trouble with the third part. I would like the platform to stop exactly at a tile coordinate, but since I'm working with acceleration, it would seem easy to simply begin applying acceleration in the opposite direction to the platform's current speed once one tile's length of distance remains (assuming the platform is traveling more than one tile length, but to keep things simple, let's say it is). The question then is: what would the correct acceleration value be to produce this effect? How would I find that value?
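
    For what it's worth, one way to fill in the missing step (my own sketch, using the standard constant-acceleration relation v^2 = v0^2 + 2*a*d): to come to rest from speed v over a remaining distance d, the deceleration has to be a = v^2 / (2*d). A rough illustration, with hypothetical names:

        class PlatformMotion {
            float position;   // along the travel axis, in world units
            float speed;      // current speed, units per second
            float decel;      // deceleration used during the stopping phase

            // Call when exactly one tile length remains: solve v^2 = 2*a*d for a.
            void beginStopping(float remainingDistance) {
                decel = (speed * speed) / (2f * remainingDistance);
            }

            // Per-frame update during the stopping phase.
            void updateStopping(float dt, float targetPosition) {
                speed = Math.max(0f, speed - decel * dt);
                position += speed * dt;
                if (speed == 0f) {
                    position = targetPosition; // snap to the tile to absorb floating-point error
                }
            }
        }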

    Read the article

  • Application Scope v's Static - Not Quite the same

    - by Duncan Mills
    An interesting question came up today which, innocent as it sounded, needed a second or two to consider. What's the difference between storing, say, a Map of reference information as a static as opposed to storing the same map as an application-scoped variable in JSF? From the perspective of the web application itself there seems to be no functional difference: in both cases the information is confined to the current JVM and potentially visible to your app code (note that Application Scope is not magically propagated across a cluster; you would need a separate instance on each VM). To my mind the primary consideration here is a matter of leakage. A static will be (potentially) visible to everything running within the same VM (OK, this depends on which class loader was used, but let's keep this simple), and this includes your model code and indeed other web applications running in the same container. An application-scoped object, in JSF terms, is much more ring-fenced and is only visible to the web app itself, not other web apps running on the same server and not directly to the business model layer if that is running in the same VM. So given that I'm a big fan of coding applications to say what I mean, using Application Scope appeals because it explicitly states how I expect the data to be used and provides a more explicit statement about visibility and indeed dependency, as I'd generally explicitly inject it where it is needed. Alternative viewpoints / thoughts are, as ever, welcomed...
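
    A rough sketch of the two options being compared (my own illustration; the class names are made up, and the exact scope annotation depends on whether you use JSF managed beans or CDI):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // 1) A static: visible to anything loaded by the same class loader, including model code.
        class ReferenceDataHolder {
            static final Map<String, String> REFERENCE_DATA = new ConcurrentHashMap<>();
        }

        // 2) Application scope: ring-fenced to the web app and explicitly injectable where needed.
        @javax.enterprise.context.ApplicationScoped
        class ReferenceDataBean {
            private final Map<String, String> referenceData = new ConcurrentHashMap<>();

            public Map<String, String> getReferenceData() {
                return referenceData;
            }
        }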

    Read the article

  • design for supporting entities with images

    - by brainydexter
    I have multiple entities, like Hotels and Destination Cities, which can contain images. The way I have my system set up right now is that I think of all the images as belonging to one universal set (a table in the DB contains file paths to all the images). When I have to add an image to an entity, I check whether the image exists in this universal set. If it exists, I attach a reference to this image; otherwise I create a new image. E.g.:

        class ImageEntityHibernateDAO {
            public void addImageToEntity(IContainImage entity, String filePath, String title, String altText) {
                ImageEntity image = this.getImage(filePath);
                if (image == null)
                    image = new ImageEntity(filePath, title, altText);
                getSession().beginTransaction();
                entity.getImages().add(image);
                getSession().getTransaction().commit();
            }
        }

    My question is: earlier I had to write this code for each entity (and each entity would have a Set collection). So, instead of re-writing the same code, I created the following interface:

        public interface IContainImage {
            Set<ImageEntity> getImages();
        }

    Entities which have image collections also implement the IContainImage interface. Now, for any entity that needs to support adding images, all I have to invoke from the DAO looks something like this:

        // in DestinationDAO::addImageToDestination
        imageDao.addImageToEntity(destination, imageFileName, imageTitle, imageAltText);

        // in HotelDAO::addImageToHotel
        imageDao.addImageToEntity(hotel, imageFileName, imageTitle, imageAltText);

    It'd be a great help if someone could offer some critique of this design. Are there any serious flaws that I'm not seeing right away?

    Read the article

  • Ubuntu server 11.04 recognizes only 1 core instead of 4

    - by Kreker
    I searched other questions and googled a lot but couldn't find a solution to this problem. Ubuntu Server 11.04 64-bit is installed on a Dell PowerEdge with an Intel Xeon X5450. It only recognizes 1 of the 4 cores I have. I tried modifying the GRUB config, but that didn't work, and in the machine's BIOS I didn't find anything useful.

    CPU:

        root@darwin:~# cat /proc/cpuinfo
        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 23
        model name      : Intel(R) Xeon(R) CPU X5450 @ 3.00GHz
        stepping        : 10
        cpu MHz         : 2992.180
        cache size      : 6144 KB
        physical id     : 0
        siblings        : 1
        core id         : 0
        cpu cores       : 1
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave lahf_lm dts tpr_shadow vnmi flexpriority
        bogomips        : 5984.36
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 38 bits physical, 48 bits virtual
        power management:

    GRUB:

        root@darwin:~# cat /etc/default/grub
        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        #   info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        #GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=2
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT=""
        GRUB_CMDLINE_LINUX="noapic nolapic"  # was with acpi=off
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    The complete dmesg is too long, so I posted it on pastebin: http://pastebin.com/bsKPBhzu

    Read the article

  • Java Embedded @ JavaOne

    - by Tori Wieldt
    Developers, tell your manager (or the other half of your developer-entrepreneur self) about this new event being held Wednesday, Oct. 3rd and Thursday, Oct. 4th in San Francisco at the Hotel Nikko (during JavaOne). Java Embedded @ JavaOne is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn about how they can use Java Embedded technologies for new business opportunities. The ideal audience for this event is business and technical decision makers (e.g. System Integrators, CTOs, CXOs, Chief Architects/Architects, Business Development Managers, Project Managers, Purchasing Managers, Technical Leads, Senior Decision Makers, Practice Leads, R&D Heads, and Development Managers/Leads). A call for papers has gone out, but it is ONLY for business-focused submissions. Event organizers are looking for best practices, case studies and panel discussions on emerging opportunities in the Java embedded space. Please consider submitting a paper; the deadline for submission is July 18. Attendees of both JavaOne and Oracle OpenWorld can attend Java Embedded @ JavaOne by purchasing a $100.00 USD upgrade to their full conference pass. Rates for attending Java Embedded @ JavaOne alone are here.

    Read the article

  • Functional programming compared to OOP with classes

    - by luckysmack
    I have been interested in some of the concepts of functional programming lately. I have used OOP for some time now, and I can see how I would build a fairly complex app in OOP: each object knows how to do the things that object does, or anything its parent class does as well. So I can simply tell Person().speak() to make the person talk. But how do I do similar things in functional programming? I see how functions are first-class items, but a function only does one specific thing. Would I simply have a say() function floating around and call it with the equivalent of a Person() argument so I know what kind of thing is saying something? So I can see the simple things; I just don't see how to do the comparable of OOP and objects in functional programming, so I can modularize and organize my code base. For reference, my primary experience with OOP is Python, PHP, and some C#. The languages I am looking at that have functional features are Scala and Haskell, though I am leaning towards Scala. Basic example (Python):

        class Animal(object):
            def say(self, what):
                print(what)

        class Dog(Animal):
            def say(self, what):
                super().say('dog barks: {0}'.format(what))

        class Cat(Animal):
            def say(self, what):
                super().say('cat meows: {0}'.format(what))

        dog = Dog()
        cat = Cat()
        dog.say('ruff')
        cat.say('purr')

    Read the article

  • Is encoding needed in this decryption?

    - by Lijo
    I have an encryption/decryption scenario as shown below.

        // [Clear text ID string as input] -- [(ASCII GetBytes) + Encoding] -- [Encryption as byte array] -- [Database column is VarBinary] -- [Pass byte[] as VarBinary parameter to SP for comparison]
        // [ID stored as VarBinary in Database] -- [Read as byte array] -- [(Decrypt as byte array) + Encoding + (ASCII GetString)] -- [Show as string in the UI]

    My question is about the decryption scenario. After decryption I get a byte array, and I am doing an encoding conversion (IBM037) after that. Is it correct? Is there something wrong in the flow shown above?

        private static byte[] GetEncryptedID(string id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = EncodeTo64(id);
            input.RequestType = Encryption;
            ProgramInterface inputRequest = new ProgramInterface();
            inputRequest.Test_Trial_Request = input;
            using (KTestService operation = new KTestService())
            {
                return ((operation.KTrialOperation(inputRequest)).Test_Trial_Response.ResponseText);
            }
        }

        private static string GetDecryptedID(byte[] id)
        {
            Interface_Request input = new Interface_Request();
            input.RequestText = id;
            input.RequestType = Decryption;
            ProgramInterface request = new ProgramInterface();
            request.Test_Trial_Request = input;
            using (KTestService operationD = new KTestService())
            {
                ProgramInterface1 response = operationD.KI014Operation(request);
                byte[] decryptedValue = response.ICSF_AES_Response.ResponseText;
                Encoding sourceByteFormat = Encoding.GetEncoding("IBM037");
                Encoding destinationByteFormat = Encoding.ASCII;
                // Convert from one byte format to the other (IBM to ASCII)
                byte[] ibmEncodedBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, decryptedValue);
                return System.Text.ASCIIEncoding.ASCII.GetString(ibmEncodedBytes);
            }
        }

        private static byte[] EncodeTo64(string toEncode)
        {
            byte[] dataInBytes = System.Text.ASCIIEncoding.ASCII.GetBytes(toEncode);
            Encoding destinationByteFormat = Encoding.GetEncoding("IBM037");
            Encoding sourceByteFormat = Encoding.ASCII;
            // Convert from one byte format to the other (ASCII to IBM)
            byte[] asciiBytes = Encoding.Convert(sourceByteFormat, destinationByteFormat, dataInBytes);
            return asciiBytes;
        }
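
    As a side illustration of the decoding step only (my own sketch, shown in Java rather than the poster's C#, and assuming the decrypted bytes really are IBM037/EBCDIC text): a byte array only becomes a string relative to a charset, so decoding it with the charset it was actually encoded in is the step that matters.

        import java.nio.charset.Charset;

        class EbcdicRoundTrip {
            public static void main(String[] args) {
                // "Cp037" / "IBM037" is the EBCDIC code page referenced above; availability
                // depends on the JDK's extended charsets being present.
                Charset ebcdic = Charset.forName("Cp037");
                byte[] hostBytes = "HELLO123".getBytes(ebcdic);   // what the host-side code works with
                String recovered = new String(hostBytes, ebcdic); // decode straight back to text
                System.out.println(recovered);                    // prints HELLO123
            }
        }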

    Read the article

  • Trying to compile the newest Apache from source with the newest OpenSSL

    - by AlexMA
    I need to install Apache 2.4.10 using OpenSSL 1.0.1i. I compiled OpenSSL from source with:

        $ ./config \
          --prefix=/opt/openssl-1.0.1i \
          --openssldir=/opt/openssl-1.0.1i
        $ make
        $ sudo make install

    and Apache with:

        ./configure --prefix=/etc/apache2 \
          --enable-access_compat=shared \
          --enable-actions=shared \
          --enable-alias=shared \
          --enable-allowmethods=shared \
          --enable-auth_basic=shared \
          --enable-authn_core=shared \
          --enable-authn_file=shared \
          --enable-authz_core=shared \
          --enable-authz_groupfile=shared \
          --enable-authz_host=shared \
          --enable-authz_user=shared \
          --enable-autoindex=shared \
          --enable-dir=shared \
          --enable-env=shared \
          --enable-headers=shared \
          --enable-include=shared \
          --enable-log_config=shared \
          --enable-mime=shared \
          --enable-negotiation=shared \
          --enable-proxy=shared \
          --enable-proxy_http=shared \
          --enable-rewrite=shared \
          --enable-setenvif=shared \
          --enable-ssl=shared \
          --enable-unixd=shared \
          --enable-ssl \
          --with-ssl=/opt/openssl-1.0.1i \
          --enable-ssl-staticlib-deps \
          --enable-mods-static=ssl
        make

    (I would run sudo make install next, but I get an error.) I'm essentially following the guide here, except with slightly newer versions. My problem is that I get a linker error when I run make for Apache:

        Making all in support
        make[1]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        make[2]: Entering directory `/home/developer/downloads/httpd-2.4.10/support'
        /usr/share/apr-1.0/build/libtool --silent --mode=link x86_64-linux-gnu-gcc -std=gnu99 -pthread -L/opt/openssl-1.0.1i/lib -lssl -lcrypto \
            -o ab ab.lo /usr/lib/x86_64-linux-gnu/libaprutil-1.la /usr/lib/x86_64-linux-gnu/libapr-1.la -lm
        /usr/bin/ld: /opt/openssl-1.0.1i/lib/libcrypto.a(dso_dlfcn.o): undefined reference to symbol 'dlclose@@GLIBC_2.2.5'

    I tried the answer here, but no luck. I would prefer to just use aptitude, but unfortunately the versions I need aren't available yet. If anyone knows how to fix the linker problem (or what I think is a linker problem), or knows of a better way to tell Apache to use a newer OpenSSL, it would be greatly appreciated; I've got apache 1.0.1i working otherwise.

    Read the article

  • Tips about how to spread Object Oriented practices

    - by Augusto
    I work for a medium-sized company that has around 250 developers. Unfortunately, lots of them are stuck in a procedural way of thinking, and some teams constantly deliver big Transactional Script applications when in fact the application contains rich logic. They also fail to manage design dependencies and end up with services which depend on a large number of other services (a clean example of a Big Ball of Mud). My question is: can you suggest how to spread this type of knowledge? I know that the surface of the problem is that these applications have a poor architecture and design. Another issue is that there are some developers who are against writing any kind of test. A few things I'm doing to change this (but I'm either failing or the change is too small) are:

        - Running presentations about design principles (SOLID, clean code, etc).
        - Workshops about TDD and BDD.
        - Coaching teams (this includes using Sonar, FindBugs, JDepend and other tools).
        - IDE & refactoring talks.

    A few things I'm thinking of doing in the future (but I'm concerned they might not be good):

        - Forming a team of OO evangelists, who disseminate an OO way of thinking in different teams (these people would need to change teams every few months).
        - Running design review sessions to criticise the design and suggest improvements (even if the improvements are not made because of time constraints, I think this might be useful).

    Something I've found with the teams I coach is that as soon as I leave them, they revert back to the old practices. I know I don't spend a lot of time with them, usually just one month, so whatever I'm doing, it doesn't stick. I'm sorry this question is spattered with frustration, but the alternative to writing this was to hit my head on the wall until I pass out.

    Read the article

  • What are some good seminar topics that can be used to improve designer & developer communication?

    - by tactoth
    Hello guys, what I'll describe is what happens in the company I work for, but I know it's more like a common issue in software companies. I'm a development team leader at an internet service company that provides a service very similar to Dropbox. In our company we have mainly two divisions: the tech division and the designers division, each with its own reporting hierarchy. Designers focus on designing the UI and prioritizing features, while developers focus on implementing the designers' ideas (more like being driven, as our big boss has said). Then here comes our issue: the DEV team and DES team communicate very badly. DEV complains about DES for these reasons:

        - Too-frequent changes of requirements.
        - Too-complicated interaction (our DEV team has actually learned many HCI principles).
        - Documents for design are incomplete; usually you just get 'design principles' and it's up to DEV to complete the design details.
        - When you find design defects and ask the DES team to resolve them, the DES team quickly changes the principles, and you end up spending another several weeks because the change is so fundamental.

    While DES complains about DEV for these reasons:

        - The code architecture is not good enough to adapt to changing requirements (obviously DES knows something about software development).
        - Product design is about principles, not details; DEV fails to realize this.
        - Communication should be quick and should be mainly oral. Trying to capture most feature discussion in documents for reference is too much overhead and doesn't make sense.

    As you can see, DEV and DES have different ideas on product design and encourage very different practices. We have this difference because of the way we work. So our solution is that we should plan some seminars to make each side more aware of the way the other side works. My question is: what are some good topics for such seminars? Guessing that some people may not think seminars can solve this problem, please also suggest your own solution.

    Read the article

  • First Shard for SQL Azure and SQL Server

    - by Herve Roggero
    That's it!!!!! It's ready to go and be tested, abused and improved! It requires .NET 4.0 and uses some cool technologies, like caching (the new System.Runtime.Caching) and the Task Parallel Library (System.Threading.Tasks). With this library you can:

        - Define a shard of 1, 2 or 100 SQL databases (a mix of SQL Server and SQL Azure)
        - Read from the shard in parallel or sequentially, and cache result sets
        - Update or delete a record from the shard
        - Insert records quickly in the shard with a round-robin load
        - Reset the cache

    You can download the source code and a sample application here: http://enzosqlshard.codeplex.com/ Note about the breadcrumbs: I had to add a connection GUID in order for the library to know which database a record came from. The GUID is currently calculated on the fly in the library using some of the parameters of the connection string. The GUID is also dynamically added to the result set so the client can pass it back to the library. I am curious to get your feedback on this approach. ** Correction from my previous post: this is a library for a Horizontal Partition Shard (HPS): tables are split across databases horizontally, so in essence the tables need to have the same schema across the databases.

    Read the article

  • Object oriented wrapper around a dll

    - by Tom Davies
    So, I'm writing a C# managed wrapper around a native dll. The dll contains several hundred functions. In most cases, the first argument to each function is an opaque handle to a type internal to the dll. So, an obvious starting point for defining some classes in the wrapper would be to define classes corresponding to each of these opaque types, with each instance holding and managing the opaque handle (passed to its constructor). Things are a little awkward when dealing with callbacks from the dll. Naturally, the callback handlers in my wrapper have to be static, but the callback arguments invariably contain an opaque handle. In order to get from the static callback back to an object instance, I've created a static dictionary in each class, associating handles with class instances. In the constructor of each class, an entry is put into the dictionary, and this entry is then removed in the destructor. When I receive a callback, I can then consult the dictionary to retrieve the class instance corresponding to the opaque reference. Are there any obvious flaws to this? Something that seems to be a problem is that the existence of the static dictionary means that the garbage collector will not act on my class instances that are otherwise unreachable. As they are never garbage collected, they never get removed from the dictionary, so the dictionary grows. It seems I might have to manually dispose of my objects, which is something I absolutely would like to avoid. Can anyone suggest a good design that allows me to avoid having to do this?
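
    One direction worth considering (my own sketch, shown in Java for illustration; in C# the analogous tools would be WeakReference values or ConditionalWeakTable): keep the handle-to-instance registry, but hold the instances weakly so the registry itself does not keep them alive.

        import java.lang.ref.WeakReference;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class HandleRegistry<T> {
            private final Map<Long, WeakReference<T>> byHandle = new ConcurrentHashMap<>();

            // Called from the wrapper's constructor: remember which wrapper owns which handle.
            void register(long handle, T wrapper) {
                byHandle.put(handle, new WeakReference<>(wrapper));
            }

            // Called from the static callback: resolve handle -> live wrapper, or null if it was collected.
            T resolve(long handle) {
                WeakReference<T> ref = byHandle.get(handle);
                T wrapper = (ref == null) ? null : ref.get();
                if (wrapper == null) {
                    byHandle.remove(handle); // prune stale entries lazily
                }
                return wrapper;
            }
        }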

    Read the article

  • Why does the clip space in OpenGL have 4 dimensions?

    - by user827992
    I will use this as a generic reference, but the more I browse online docs and books, the less I understand it.

        const float vertexPositions[] = {
            0.75f,  0.75f, 0.0f, 1.0f,
            0.75f, -0.75f, 0.0f, 1.0f,
           -0.75f, -0.75f, 0.0f, 1.0f,
        };

    In this online book there is an example of how to draw the first and classic hello world for OpenGL: a triangle. The vertex structure for the triangle is declared as stated in the code above. The book, like all the other sources on this, stresses the point that clip space is a 4D structure that is used to decide what will be rasterized and rendered to the screen. Here are my questions: I can't imagine something in 4D; I don't think a human can. What is the 4th dimension of this clip space? The most human-readable doc I have read speaks about a camera, which is just an abstraction over the clipping concept, and I get that. The problem is: why not use the concept of a camera in the first place, which is a more familiar 3D structure? The only problem with the concept of a camera is that you need to define the perspective in another way, so you basically have to add another statement about what kind of camera you wish to have. And how am I supposed to read 0.75f, 0.75f, 0.0f, 1.0f? All I get is that they are all float values, and I get the meaning of the first 3 values. What does the last one mean?
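
    For what it's worth, a small illustration of the usual explanation (my own addition, not from the original post): the 4th value is the homogeneous coordinate w. A clip-space position (x, y, z, w) only becomes a 3D position after the perspective divide by w, which the GPU performs after clipping; with w = 1.0 the divide changes nothing, which is why simple examples can treat the first three values as an ordinary 3D position.

        class ClipSpace {
            // The perspective divide turns a 4D clip-space coordinate into 3D normalized device coordinates.
            static float[] toNormalizedDeviceCoords(float x, float y, float z, float w) {
                return new float[] { x / w, y / w, z / w };
            }

            public static void main(String[] args) {
                // For the vertex (0.75, 0.75, 0.0, 1.0) the divide changes nothing, because w == 1.
                float[] ndc = toNormalizedDeviceCoords(0.75f, 0.75f, 0.0f, 1.0f);
                System.out.printf("NDC: (%.2f, %.2f, %.2f)%n", ndc[0], ndc[1], ndc[2]);
            }
        }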

    Read the article

  • HUD layer not being added to my scene

    - by Shailesh_ios
    I have a CCScene which already holds my game layer, and I am trying to add a HUD layer to it. But the HUD layer is not getting added to my scene. I can say that because I have set up a CCLabel on the HUD layer, and when I run my project, I cannot see that label. Here's what I am doing in my game layer:

        +(id) scene
        {
            CCScene *scene = [CCScene node];
            GameScreen *layer = [GameScreen node];
            [scene addChild: layer];
            HUDclass *otherLayer = [HUDclass node];
            [scene addChild:otherLayer];
            layer.HC = otherLayer; // HC is a reference to my HUD layer in the @interface of the game layer
            return scene;
        }

    And then in my HUD layer I have just added a CCLabelTTF in its init method like this:

        -(id)init
        {
            if ((self = [super init]))
            {
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"IN WEAPON CLASS" fontName:@"Arial" fontSize:15];
                label.position = ccp(240,160);
                [self addChild:label];
            }
            return self;
        }

    But now when I run my project I don't see that label. What am I doing wrong here? Any ideas? Thanks in advance for your time.

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow. Instead you can use page-level parameters defined for your page in the adfc-config.xml file. So below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view-object-level service method and how to invoke it from your page. P.S. You might wonder: why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department the param.amount value is gone. However, the viewScope value is still there as long as you stay on this page.

    Read the article

  • Why does working processors harder use more electrical power?

    - by GazTheDestroyer
    Back in the mists of time when I started coding, at least as far as I'm aware, processors all used a fixed amount of power. There was no such thing as a processor being "idle". These days there are all sorts of technologies for reducing power usage when the processor is not very busy, mostly by dynamically reducing the clock rate. My question is: why does running at a lower clock rate use less power? My mental picture of a processor is of a reference voltage (say 5V) representing a binary 1, and 0V representing 0. Therefore I tend to think of a constant 5V being applied across the entire chip, with the various logic gates disconnecting this voltage when "off", meaning a constant amount of power is being used. The rate at which these gates are turned on and off seems to have no relation to the power used. I have no doubt this is a hopelessly naive picture, but I am no electrical engineer. Can someone explain what's really going on with frequency scaling, and how it saves power? Are there any other ways that a processor uses more or less power depending on state? E.g. does it use more power if more gates are open? How are mobile / low-power processors different from their desktop cousins? Are they just simpler (fewer transistors?), or is there some other fundamental design difference?
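
    A back-of-the-envelope illustration of the usual answer (my own addition, using the standard CMOS dynamic-power approximation P ≈ C * V^2 * f and ignoring leakage): power is burned mostly when gates switch, so it scales with the switching frequency, and because a lower frequency also permits a lower supply voltage, dynamic voltage and frequency scaling (DVFS) saves more than the frequency drop alone would suggest.

        class DynamicPower {
            // Dynamic switching power: switched capacitance * voltage squared * clock frequency.
            static double watts(double capacitanceFarads, double volts, double hertz) {
                return capacitanceFarads * volts * volts * hertz;
            }

            public static void main(String[] args) {
                double c = 1e-9;                           // illustrative switched capacitance, not a real chip's value
                System.out.println(watts(c, 1.2, 3.0e9));  // full clock, full voltage
                System.out.println(watts(c, 1.2, 1.5e9));  // half clock, same voltage: half the dynamic power
                System.out.println(watts(c, 0.9, 1.5e9));  // half clock, reduced voltage: much less again
            }
        }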

    Read the article

  • The tale of how the PowerShell CmdLets got installed with Azure SDK 1.4

    - by Enrique Lima
    I installed the Azure SDK 1.4 while rebuilding my laptop and then ran the installation for the Windows Azure Service Management (WASM) PowerShell CmdLets. I kicked off the installation script by locating the path to which the WASM PowerShell CmdLets were deployed and double-clicking the startHere command. It opens the WASM installation dialog. Click Next. Click Next again. Notice the red X next to the Azure SDK 1.3; the problem is that I have SDK 1.4. Here is the workaround: go back to the location of the deployed WASM sources, then into the setup path, then scripts > dependencies > check. Now locate the CheckAzureSDK.ps1 file, right-click it, and choose Edit. The content of this ps1 file checks for a specific version of the Azure SDK, in this case version 1.3.11133.0038; we need it to check for version 1.4.20227.1419 instead. Save the ps1 file, go back to the open WASM install dialog, and click Rescan. This time it should pass; click Next. A command prompt window will appear; press any key. This completes the installation. Click Close.

    Read the article

  • BI Publisher: Formatting Issues

    - by Manoj Madhusoodanan
    While creating BI Publisher reports, formatting issues are quite common. Here I am discussing some common issues related to BIP report development.

    1) The first issue is related to column formatting. When you want to display data which has leading zeros, or trailing zeros after '.', in EXCEL output, you will not get the desired output; in PDF it will come out as you expect. This is not an issue with your data. It is due to the nature of the EXCEL cell format: when you put text data into a cell without changing the cell format, it is treated as a number, and all leading zeros and all trailing zeros after '.' are truncated. So what you have to do is convert the data into a form which EXCEL will treat as text. Eg: if you want to display 0020100, convert the data to ="0020100"; likewise 23789.02300 to ="23789.02300". Note: this is applicable to EXCEL output only. If you have multiple output types, apply it only for EXCEL.

    2) The second issue is related to report size in the PDF output type. If the number of columns is large and you want to show most of the columns in one row in a PDF output, you can choose the paper size Legal (8.5 x 14''). You will get more space in the template to accommodate more columns.

    3) If your XML data contains special characters like &, <, > etc., pass the data through the DBMS_XMLGEN.CONVERT function. It will replace special characters with the corresponding XML notations. Eg: (a>b) & (c!=d) becomes (a&gt;b) &amp; (c!=d)

    Read the article

  • I've been told that Exceptions should only be used in exceptional cases. How do I know if my case is exceptional?

    - by tieTYT
    My specific case here is that the user can pass a string into the application; the application parses it and assigns it to structured objects. Sometimes the user may type in something invalid. For example, their input may describe a person, but they may say the person's age is "apple". Correct behavior in that case is to roll back the transaction and tell the user an error occurred and they'll have to try again. There may also be a requirement to report every error we can find in the input, not just the first. In this case, I argued we should throw an exception. He disagreed, saying, "Exceptions should be exceptional: it's expected that the user may input invalid data, so this isn't an exceptional case." I didn't really know how to argue that point, because by the definition of the word, he seems to be right. But it's my understanding that this is why exceptions were invented in the first place. It used to be that you had to inspect the result to see if an error occurred, and if you failed to check, bad things could happen without you noticing. Without exceptions, every level of the stack needs to check the result of the methods it calls, and if a programmer forgets to check at one of these levels, the code could accidentally proceed and save invalid data (for example). That seems more error-prone. Anyway, feel free to correct anything I've said here. My main question is: if someone says exceptions should be exceptional, how do I know if my case is exceptional?
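
    One way the two positions are often reconciled in practice (a sketch of my own, not from the discussion above): collect every validation problem first, then signal the overall failure once with a single exception carrying all of them, so callers cannot silently ignore a bad parse and the user still gets the full error report.

        import java.util.ArrayList;
        import java.util.List;

        class InvalidInputException extends Exception {
            final List<String> problems;

            InvalidInputException(List<String> problems) {
                super("Input has " + problems.size() + " problem(s)");
                this.problems = problems;
            }
        }

        class PersonParser {
            // Each field check records its problem instead of failing immediately.
            static int parseAge(String raw, List<String> problems) {
                try {
                    return Integer.parseInt(raw);
                } catch (NumberFormatException e) {
                    problems.add("age must be a number, got: " + raw);
                    return -1;
                }
            }

            static void parse(String ageField) throws InvalidInputException {
                List<String> problems = new ArrayList<>();
                int age = parseAge(ageField, problems);
                if (!problems.isEmpty()) {
                    throw new InvalidInputException(problems); // one signal, all errors attached
                }
                // ...build the structured Person object from age and the other validated fields...
            }
        }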

    Read the article

  • Be Prepared: Technology Trends Converge and Disrupt

    - by Richard Lefebvre
    Cloud, big data, mobile, social media: these mega trends in technology have had a profound impact on our lives. And now, according to SVP Ravi Puri from North America Oracle Consulting Services, these trends are starting to converge and will affect us even more. His article, "Cloud, Analytics, Mobile, And Social: Convergence Will Bring Even More Disruption," appeared in Forbes on June 6. For example, mobile and social are causing huge changes in the business world. Big data and cloud are coming together to help us with deep analytical insights. And much more. These convergences are causing another wave of disruption, which can drive all kinds of improvements in such things as customer satisfaction, competitive advantage, and growth. But, according to Puri, companies need to be prepared. In this article, Puri urges companies to get out in front of the new innovations. He gives good directions on how to do so to accelerate time to value and minimize risk. The post is a good thought leadership piece to pass on to your customers.

    Read the article

  • Cocos2d v2.0 and OpenGL 2.0/1.0: where to start

    - by mm24
    I started developing my very first game 3 months ago using Cocos2d 2.0 for iPhone. I am now at the stage where I'd like to add some cool effects to the bullets and some special weapons (see my waveforms question here). I got a good answer in the cocos2d-iphone forum (see this one). Unfortunately I am a bit paralyzed now: I don't know if I would be overdoing it by learning OpenGL 2.0, or if I should just stick to the old 1.0. There is a good intro to various tutorials in Steffen Itterheim's blog (see this post). I would like to add to my game: a blur effect on the bullets (here is a tutorial for OpenGL 1.0), a waveform (see above), and some realistic water ripples (here is some nice sample code). So now, given that I don't want to overdo things but at the same time I want to achieve those effects, where should I start? Should I discard the OpenGL 1.0 tutorials, or should I use only OpenGL 1.0 code? How can I avoid confusion? I mean, it seems that the compiler recognizes both, but there are some conflicting calls in some circumstances. I am fairly sure this has some explanation; is there some reference to this somewhere?

    Read the article

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS, for the experience and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would encode what each parameter should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then every function still needs to run some sort of parameter validation that parses the parameter name to determine what the value can be and then validates against it. Granted, this could be handled by passing the list of parameters to a function, but that call still has to happen, and one of my goals is to remove the parameter validation from the function itself, so that a function contains only the code that accomplishes its intended task, without the additional validation code. Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function call anyway, so I should stick with doing that?
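
    A sketch of one way to pull the checks out of the function body (my own illustration, with made-up names; the same idea ports to PHP or any other language): declare the expected types once, validate the incoming argument map before dispatching, and keep the function itself free of validation code.

        import java.util.LinkedHashMap;
        import java.util.Map;

        class ParamSpec {
            private final Map<String, Class<?>> expected = new LinkedHashMap<>();

            // Declare a required parameter and its expected type.
            ParamSpec require(String name, Class<?> type) {
                expected.put(name, type);
                return this;
            }

            // Run all declared checks against the actual arguments before the real function is called.
            void validate(Map<String, Object> args) {
                expected.forEach((name, type) -> {
                    Object value = args.get(name);
                    if (value == null || !type.isInstance(value)) {
                        throw new IllegalArgumentException(name + " must be a " + type.getSimpleName());
                    }
                });
            }
        }

        // Usage: new ParamSpec().require("id", Integer.class).require("title", String.class).validate(args);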

    Read the article

  • SQL Saturday #220 Atlanta May 2013!

    - by Most Valuable Yak (Rob Volk)
    If you love SQL Server training and are near the Atlanta area, or just love us so much you're willing to travel here, please come join us for SQL SATURDAY #220! The main event is Saturday, May 18. The event is free, with a $10.00 lunch fee. The main page has more details here: http://www.sqlsaturday.com/220/eventhome.aspx We are also offering pre-conference sessions on Friday, May 17, by 5 world-renowned presenters:

        - Denny Cherry: SQL Server Security
        - Adam Machanic: Surfing the Multicore Wave: Processors, Parallelism, and Performance
        - Stacia Misner: Languages of BI
        - Bill Pearson: Practical Self-Service BI with PowerPivot for Excel
        - Eddie Wuerch: The DBA Skills Upgrade Toolkit

    We have an early bird registration price of $119 until noon EST Friday, March 22. After that the price goes to $149, a STEAL when you compare it to the PASS Summit price. :) Please click on the links to register and for more information. You can also follow the hash tag #SQLSatATL on Twitter for more news about this event. Can't wait to see you all there!

    Read the article

  • how to properly implement alpha blending in a complex 3d scene

    - by Gajet
    I know this question might sound a bit easy to answer, but it's driving me crazy. There are too many possible situations that a good alpha blending mechanism should handle, and for each algorithm I can think of there is something missing. These are the methods I've thought about so far. First, I thought about sorting objects by depth, but this simply fails because objects are not simple shapes: they might have curves and might loop inside each other, so I can't always tell which one is closer to the camera. Then I thought about sorting triangles, but this might also fail; though I'm not sure how to implement it, there is a rare case that again causes problems, in which two triangles pass through each other and no one can tell which one is nearer. The next thing was using the depth buffer; after all, the main reason we have a depth buffer is the sorting problems I mentioned, but now we get another problem. Since objects might be transparent, more than one object may be visible in a single pixel, so for which object should I store the pixel depth? I then thought maybe I could store only the depth of the front-most object, and use that to determine how to blend subsequent draw calls at that pixel. But again there was a problem: think about 2 semi-transparent planes with a solid plane in the middle of them. If I render the solid plane at the end, one can still see the most distant plane. Note that I was going to merge every two planes until there is only one color left for that pixel. Obviously I can't use the sorting methods either, because of the same reasons I explained above. Finally, the only thing I imagine being able to work is to render all objects into different render targets, then sort those layers and display the final output, but this time I don't know how to implement that algorithm.
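
    A tiny illustration of why ordering matters in the first place (my own addition): standard back-to-front alpha blending uses the "over" operator, dst = src * srcAlpha + dst * (1 - srcAlpha), and that operator is not commutative, so compositing the same two transparent surfaces in a different order produces a different color.

        class AlphaBlend {
            // One channel of the "over" operator: new dst = src * srcAlpha + dst * (1 - srcAlpha).
            static float over(float src, float srcAlpha, float dst) {
                return src * srcAlpha + dst * (1f - srcAlpha);
            }

            public static void main(String[] args) {
                // Red channel of two 50% transparent layers (one pure red, one pure green) over black.
                float red = 1f, green = 0f;
                System.out.println(over(red, 0.5f, over(green, 0.5f, 0f)));   // red drawn last  -> 0.5
                System.out.println(over(green, 0.5f, over(red, 0.5f, 0f)));   // green drawn last -> 0.25
            }
        }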

    Read the article
