Search Results

Search found 2857 results on 115 pages for 'race condition'.


  • Formatting php, what works more efficiently?

    - by JamesM-SiteGen
    Hello fellow programmers, I was just wondering what makes PHP work faster. I have a few methods that I always go and do, but that only improves the way I can read it; how about the interpreter? Should I include the curly braces when there is only one statement to run?

        if(...){
            echo "test";
        }
        # Or..
        if(...)
            echo "test";

    Which should be used?

    I have also found http://beta.phpformatter.com/ and I find the following settings to be good, but are they?

    Indentation:
    - Indentation style: {K&R (One true brace style)}
    - Indent with: {Tabs}
    - Starting indentation: [1]
    - Indentation: [1]

    Common:
    [x] Remove all comments
    [x] Remove empty lines
    [x] Align assignments statements nicely
    [ ] Put a comment with the condition after if, while, for, foreach, declare and catch statements

    Improvement:
    [x] Remove lines with just a semicolon (;)
    [x] Make normal comments (//) from perl comments (#)
    [x] Make long opening tag (<?php) from short one (<?)

    Brackets:
    [x] Space inside brackets- ( )
    [x] Space inside empty brackets- ( )
    [x] Space inside block brackets- [ ]
    [x] Space inside empty block brackets- [ ]

    Tiny var names: often I go through my code and change $var1 to $a, $var2 to $b and so on. I do include comments at the start of the file to show me what each letter(s) mean.

    Final note: So am I doing the right thing with the curly braces and the settings? Are there any great tips that help it run faster?

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (sidenote: I do not like this word; in general I think the type of people who use the phrase "headhunting" are the ones who are trying to sound more important than they likely are). Any serious Recruiter engages in direct recruiting activity; it is part and parcel of the business, not something unique. With the uptake in Social Media over the past 4-5 years, we have seen an increase in the number of Recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-Level Executives. While getting approached about a role can be a nice boost to a person's ego, do not let it give you an inflated sense of entitlement. The way that people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job.

    Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at a global IT company in a similar role to the one we were hiring for, though not as senior as the role we had. This role would have been the perfect step for the person to get involved in more complex work. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for and, on paper, was not as close a fit as Candidate #1 was. It seemed like I was comparing apples and oranges.

    After speaking to both candidates it turned out I was indeed comparing apples and oranges, except that the person better suited for our role was not the one I was expecting it would be. The first candidate on paper looked great: they had the experience we were looking for and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work; they seemed more interested in "what is in it for me". Rather than having a proper conversation with me, they often cut me off and asked me to hurry up when I was explaining our business, what we are doing, and so on. This person seemed more interested in the job title and money than in thinking about ways to make the role successful. Candidate #2, who had limited experience, made up for any perceived lack of experience and then some with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression, did not seem afraid of hard work and demonstrated a "team player" attitude. In talking to them they kept me engaged, listened and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle.

    Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open doors that would otherwise be closed to you. A resume with all the credentials gets you in the front door, but that is just the beginning of the process. It is not how we start the race that is important; it's how things end that matters most.

    Read the article

  • Little mysterious RowMatch

    - by kishore.kondepudi(at)oracle.com
    Incidentally, this was the first piece of code I ever wrote in ADF. The requirement was that we have tax rates which are read from a table, and there can be different types of tax rates, called certificates or exceptions, based on the rate_type column in the tax rates table. The simplest design I chose was to create an EO on the tax rates table and create two VOs called CertificateVO and ExceptionVO based on the same EO. So far so good. I wrote all the business logic in the EO and completed the model project. The CertificateVO has the query select * from tax_rates TaxRateEO where rate_type='CERTIFICATE' and the ExceptionVO is built similarly. The UI is pretty simple: it has two tabs called Certificates and Exceptions, and each tab has a button to create a tax rate. The certificate tab is driven by CertificateVO and the exception tab is driven by ExceptionVO. The CertificateVO has the default value of rate_type set to 'CERTIFICATE' and the ExceptionVO has the default value of rate_type set to 'EXCEPTION' to default the values for new records. So far so good.

    But on running the UI I noticed a strange thing: when I create a new row in Certificates I see the same row in Exceptions too, and vice-versa. That is, whatever row I create in one VO also appears in the second one, although it shouldn't. I couldn't understand the reason for this behavior even though an explicit where clause is present. Digging through the documentation I found that ADF doesn't apply the where clause to new rows; instead it applies something called a RowMatch to them. RowMatch, in simple terms, is a where condition applied to the VO rows at runtime. Since we had both VOs based on the same EO, we have the same entity cache. The filter that decides whether new rows are shown in a VO at runtime is actually the RowMatch rather than the where clause defined in the VO. The default RowMatch is empty; as a result, any new row appears in both VOs since it comes from the same entity cache.

    The solution to this problem is to use polymorphic view objects, which can do the row filtering based on configuration, or to override the getRowMatch() method in the VOImpl and pass the custom where filter instead of the default RowMatch. E.g.:

        @Override
        public RowMatch getRowMatch()
        {
            return new RowMatch("rate_type='CERTIFICATE'");
        }

    and similarly for the ExceptionVO. With a proper RowMatch in place, new rows will route themselves to the appropriate VO.

    PS: This behavior (the same row pushed to both VOs from the entity cache) is also called ViewLink Consistency. Try it out!

    Read the article

  • IE9 and the Mystery of the Broken Video Tag

    - by David Wesst
    I was very excited when Microsoft released the Internet Explorer 9 Release Candidate. As far as I was concerned, this was another nail in the coffin for IE6 and a step in the right direction for us .NET web developers, as our base camp was finally starting to support the latest and greatest future-web standards. Unfortunately, my celebration was short lived, as I soon hit a snag while loading up an HTML5 site I was building in Visual Studio 2010.

    The Mystery

    After updating Internet Explorer, I ran my HTML5 site that had the oh-so-lovely HTML5 video tag showing a video. Even though this worked in IE9 Beta, it appeared that IE9 RC could not load the same file. I figured that it was the video codec. Maybe IE9 RC no longer supported the video codec I used to encode my video. Here's the code I used:

        <video width="854" height="480" id="myOtherVideo" autoplay="" controls="">
            <source src="/DemoSite1/Media/big_buck_bunny.mp4"/>
            <div>
                <p>Your browser does not support HTML5 Video.</p>
            </div>
        </video>

    As you can see from the code, I had the "fail-safe" code inside the video tag, the idea being that if the video tag, or the video files themselves, are not supported by the browser, my video should fail gracefully. What was even more strange was the fact that it worked in all the other HTML5 browsers that supported video.

    The Investigation

    Whoa! DJ, stop the music. How can any of that make sense? Would the IE team really take such huge strides forward only to forget to include a feature that was already in the beta? I don't think so. I did plenty of searching and asking around on the web, but could not seem to find anyone else having the same problem. Eventually I came across this post talking about declaring the MIME type in the .htaccess file. That got me thinking: does my web server support the video MIME type? I was using VS2010, so how do I know what kind of MIME types are supported by default? Still, my page hosted in Cassini (the web development server in VS2010) worked in the other browsers. Why wouldn't it work with IE9 RC? To answer that, it was time to open up the upgraded toolbox known as the Developer Tools in IE9 and use the new Network tab.

    The Conclusion

    If you take a closer look at the results displayed in the Network tab, you can see that IE9 RC has interpreted the video file as text/html rather than video/mp4. To make this work, I decided to use IIS to debug my HTML5 web application by setting the web project's properties. Then, I added the MIME types that I wanted to support (i.e. video/mp4, video/ogg, video/webm). Et voila! The Mystery of the Broken Video Tag is solved.

    After Thoughts

    After solving the mystery, I still had the question of why my site worked in Chrome, Safari, and Firefox 3.6. After asking around, the best answer I received was from my colleague Tyler Doerksen. He said that IE9 likely depends on the server telling it what kind of file it is downloading, rather than trying to read the metadata about the data it is trying to download before doing anything. I have no facts to back this up, but it makes sense to me. In a browser war where milliseconds can make your browser fall back a few places in the race for supremacy, maybe the IE team opted to depend on the server knowing what kind of content it is serving up. Makes sense to me. In any case, that is just an educated guess. If you have any comments, feel free to post them below. This post also appears at http://david.wes.st

    Read the article

  • ORA-600 Troubleshooting

    - by [email protected]
    Have you observed an ORA-0600 or ORA-07445 reported in your alert log? The ORA-600 error is the generic internal error number for Oracle program exceptions. It indicates that a process has encountered a low-level, unexpected condition. The ORA-600 error statement includes a list of arguments in square brackets:

        ORA 600 "internal error code, arguments: [%s], [%s], [%s], [%s], [%s]"

    The first argument is the internal message number or character string. This argument and the database version number are critical in identifying the root cause and the potential impact to your system. The remaining arguments in the ORA-600 error text are used to supply further information (e.g. values of internal variables).

    Looking for the best way to diagnose? There is an ORA-600 Troubleshooter Tool available in My Oracle Support. This tool will lead you to applicable content in My Oracle Support on the problem and can be used to investigate the problem with argument data from the error message, or you can pull out the first 10 or 15 stack pointers from the associated trace file to match up against known bugs.

        Note 153788.1  ORA-600/ORA-7445 Troubleshooter
        Note 1082674.1 A Video To Demonstrate The Usage Of The ORA-600/ORA-7445 Lookup Tool [Video]

    Also, take a quick look at the Master Note for Diagnosing ORA-600 (MasterNoteORA600.docx) for some tips on diagnosing.

    Read the article

  • Accessing the JSESSIONID from JSF

    - by Frank Nimphius
    The following code attempts to access and print the user session ID from ADF Faces, using both the session cookie that is automatically set by the server and the HttpSession object itself.

        FacesContext fctx = FacesContext.getCurrentInstance();
        ExternalContext ectx = fctx.getExternalContext();
        HttpSession session = (HttpSession) ectx.getSession(false);
        String sessionId = session.getId();
        System.out.println("Session Id = " + sessionId);

        Cookie[] cookies = ((HttpServletRequest) ectx.getRequest()).getCookies();
        //reset session string
        sessionId = null;
        if (cookies != null) {
            for (Cookie brezel : cookies) {
                if (brezel.getName().equalsIgnoreCase("JSESSIONID")) {
                    sessionId = brezel.getValue();
                    break;
                }
            }
        }
        System.out.println("JSESSIONID cookie = " + sessionId);

    Though apparently both approaches do the same thing, they differ in the value they return and in the conditions under which they work. The getId method, for example, returns a session value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692!1322120041091

    Reading the cookie returns a value like this:

        grLFTNzJhhnQTqVwxHMGl0WDZPGhZFl2m0JS5SyYVmZqvrfghFxy!-1834097692

    Though both seem to be identical, the difference is the "!1322120041091" appended to the id when reading it directly from the HttpSession object. Depending on the use case the session id is looked up for, this difference may not be important. Another difference, however, is of importance. Reading the cookie only works if the session id is added as a cookie to the request, which is configurable for applications in the weblogic-application.xml file. If cookies are disabled, the server adds the session ID to the request URL (it appends it to the end of the URI, right after the view id reference). In this case no cookie is set, so the cookie lookup returns empty. In both cases, however, the getId variant works.
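    If the two values ever need to be compared, here is a minimal sketch of one way to reconcile them, assuming the WebLogic-style format shown above where getId() appends extra "!" segments to the cookie value. The class and method names are hypothetical and are not part of the original post.

        public final class SessionIdUtil {
            // Hypothetical helper: treats the two values as equivalent when the
            // HttpSession id equals the cookie value, or is the cookie value plus
            // additional "!" segments (e.g. the creation timestamp shown above).
            public static boolean sameSession(String httpSessionId, String cookieValue) {
                if (httpSessionId == null || cookieValue == null) {
                    return false;
                }
                return httpSessionId.equals(cookieValue)
                        || httpSessionId.startsWith(cookieValue + "!");
            }
        }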

    Read the article

  • program logic of printing the prime numbers

    - by Vignesh Vicky
    Can anybody help me understand this Java program? It just prints prime numbers, as many as you enter, and it works fine.

        import java.util.Scanner;

        class PrimeNumbers
        {
            public static void main(String args[])
            {
                int n, status = 1, num = 3;
                Scanner in = new Scanner(System.in);
                System.out.println("Enter the number of prime numbers you want");
                n = in.nextInt();
                if (n >= 1)
                {
                    System.out.println("First " + n + " prime numbers are :-");
                    System.out.println(2);
                }
                for (int count = 2; count <= n; )
                {
                    for (int j = 2; j <= Math.sqrt(num); j++)
                    {
                        if (num % j == 0)
                        {
                            status = 0;
                            break;
                        }
                    }
                    if (status != 0)
                    {
                        System.out.println(num);
                        count++;
                    }
                    status = 1;
                    num++;
                }
            }
        }

    I don't understand this for loop condition:

        for (int j = 2; j <= Math.sqrt(num); j++)

    Why are we taking the square root of num, which starts at 3? Why was it assumed to be 3?
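    For context, here is a small self-contained sketch of the idea behind the sqrt bound; it is not part of the original question. num starts at 3 only because 2 has already been printed separately; the bound itself works for any number: if num = a * b with a <= b, then a <= sqrt(num), so it is enough to look for a divisor up to sqrt(num).

        public class TrialDivision {
            // Returns true when num has no divisor other than 1 and itself.
            // Checking j up to sqrt(num) is enough: if num = a * b and a <= b,
            // then a * a <= a * b = num, so a <= sqrt(num). Every factor pair
            // therefore has one member at or below the square root.
            static boolean isPrime(int num) {
                if (num < 2) {
                    return false;
                }
                for (int j = 2; j <= Math.sqrt(num); j++) {
                    if (num % j == 0) {
                        return false; // found a divisor, e.g. 25 is caught at j = 5
                    }
                }
                return true;
            }

            public static void main(String[] args) {
                System.out.println(isPrime(3));   // true
                System.out.println(isPrime(25));  // false, 5 * 5
            }
        }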

    Read the article

  • How lookaheads are propagated in "channel" method of building LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA:

    - channels are built once; they are like water channels with a current
    - you "drop" lookahead symbols in the right places (sources) of the channel, and they propagate with the "current"
    - when a symbol propagates, there are no barriers (the only things sufficient for propagation are the presence of a channel and its direction/current); i.e. a lookahead cannot just die out of the blue

    Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state.

    How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose, if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume no -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is:

        S -> E
        E -> E - T
        E -> T
        T -> ( E )
        T -> n

    So how was it computed, when was eof dropped, and on what condition?

    Update

    It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):

        State 1:
        S ---> •E      [#]
        E ---> •E-T    [#-]
        E ---> •T      [#-]
        T ---> •n      [#-]
        T ---> •(E)    [#-]

        State 6:
        T ---> (•E)    [#-)]
        E ---> •E-T    [-)]
        E ---> •T      [-)]
        T ---> •n      [-)]
        T ---> •(E)    [-)]

    The arc from 1 to 6 is labeled (.

    Read the article

  • SQL SERVER – Get Schema Name from Object ID using OBJECT_SCHEMA_NAME

    - by pinaldave
    Sometimes a simple solution has an even simpler alternative, but we often do not practice it because we do not see value in it or find it useful. Well, today's blog post is also about something which I have not seen practiced much in code. We are so comfortable with the alternative usage that we do not feel like switching how we query the data. I was going over forums and I noticed that in one place a user had used the following code to get the Schema Name from an Object ID.

        USE AdventureWorks2012
        GO
        SELECT s.name AS SchemaName, t.name AS TableName, s.schema_id, t.OBJECT_ID
        FROM sys.Tables t
        INNER JOIN sys.schemas s ON s.schema_id = t.schema_id
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Before I continue let me say I do not see anything wrong with this script. It is just fine and one of the ways to get the SchemaName from an Object ID. However, I have been using the function OBJECT_SCHEMA_NAME to get the schema name. If I had to write the same code from the beginning, I would have written it as follows.

        SELECT OBJECT_SCHEMA_NAME(46623209) AS SchemaName, t.name AS TableName, t.schema_id, t.OBJECT_ID
        FROM sys.tables t
        WHERE t.name = OBJECT_NAME(46623209)
        GO

    Now, both of the above scripts give you the exact same result. If you remove the WHERE condition, they will give you information about all the tables of the database. Now the question is which one is better -- honestly, it is not about one being better than the other. Use the one which you prefer to use. I prefer the second one as it requires less typing. Let me ask you the same question -- which method do you use to get the schema name, and why?

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL System Table, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (seeing boot.log) I think it's the / fs, which is located at /dev/sdb5. Several questions altogether here (hope this does not break any rule):

    1. Is this normal?
    2. Can I (or even should I) prevent this anyhow?
    3. According to boot.log (below) the fs does not seem to be 'clean', or, at least, it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?

    Edit: This is my boot.log:

        fsck desde util-linux-ng 2.17.2
        udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
        /dev/sdb5: 249045/32841728 ficheros (0.3% no contiguos), 20488485/131338752 bloques
        init: ureadahead-other main process (1111) terminated with status 4
        init: ureadahead-other main process (1116) terminated with status 4
        Password:
         * Starting AppArmor profiles
        Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox    [ OK ]
         * Setting sensors limits                                       [ OK ]

    And this is the dumpe2fs result for the filesystem being checked (well, the relevant part of the log):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32841728
        Block count:              131338752
        Reserved block count:     6566937
        Free blocks:              110850356
        Free inodes:              32592701
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Dec 10 19:44:15 2010
        Last mount time:          Mon Feb 14 17:00:02 2011
        Last write time:          Mon Feb 14 16:59:45 2011
        Mount count:              1
        Maximum mount count:      33
        Last checked:             Mon Feb 14 16:59:45 2011
        Check interval:           15552000 (6 months)
        Next check after:         Sat Aug 13 17:59:45 2011
        Lifetime writes:          331 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28049496
        Default directory hash:   half_md4
        Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0005e0c4
        Journal start:            1

    This is the relevant line in fstab (at least I think this is the fs being checked):

        # Entry for /dev/sdb5 :
        UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8 / ext4 errors=remount-ro 0 1

    Read the article

  • Jump handling and gravity

    - by sprawl
    I'm new to game development and am looking for some help on improving my jump handling for a simple side-scrolling game I've made. I would like to make the jump last longer if the key is held down for the full length of the jump; otherwise, if the key is just tapped, the jump should be shorter. Currently, this is how I'm handling the jumping:

        Player.prototype.jump = function () {
            // Player pressed jump key
            if (this.isJumping === true) {
                // Set sprite to jump state
                this.settings.slice = 250;

                if (this.isFalling === true) {
                    // Player let go of jump key, increase rate of fall
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity * 2;
                } else {
                    // Player is holding down jump key
                    this.settings.y -= this.velocity;
                    this.velocity -= this.settings.gravity;
                }
            }

            if (this.settings.y >= 240) {
                // Player is on the ground
                this.isJumping = false;
                this.isFalling = false;
                this.velocity = this.settings.maxVelocity;
                this.settings.y = 240;
            }
        }

    I'm setting isJumping on keydown and isFalling on keyup. While it works okay for simple use, I'm looking for a better way to handle jumping and gravity. It's a bit buggy when the gravity is increased on keyup (which is why I had to put the last y assignment in the last if condition), so I'd like to know a better way to do it. What are some resources I could look at that would help me better understand how to handle jumping and gravity? What's a better approach to handling this? Like I said, I'm new to game development, so I could be doing it completely wrong. Any help would be appreciated.
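    One common pattern for the variable-height jump described above is sketched below, written in Java rather than the JavaScript of the original post; all class names, field names and constants are illustrative assumptions, not taken from the original code. The idea is to keep a single gravity step per frame and simply apply a stronger gravity once the key is released or the player is already falling.

        public class Jumper {
            // Illustrative constants, not from the original game
            static final double GRAVITY = 0.5;
            static final double JUMP_VELOCITY = 10.0;
            static final double RELEASE_MULTIPLIER = 2.0; // extra gravity once the key is released
            static final double GROUND_Y = 240.0;

            double y = GROUND_Y;
            double velocityY = 0.0;     // negative is up, matching the original code
            boolean onGround = true;

            void pressJump() {
                if (onGround) {
                    velocityY = -JUMP_VELOCITY;
                    onGround = false;
                }
            }

            // Called once per frame; jumpHeld reflects the current key state
            void update(boolean jumpHeld) {
                if (onGround) {
                    return;
                }
                // Falling, or rising with the key released: apply stronger gravity
                double g = (velocityY > 0 || !jumpHeld) ? GRAVITY * RELEASE_MULTIPLIER : GRAVITY;
                velocityY += g;
                y += velocityY;

                if (y >= GROUND_Y) { // landed
                    y = GROUND_Y;
                    velocityY = 0.0;
                    onGround = true;
                }
            }
        }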

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
    Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER. I found that its behavior was not the one I expected, and I investigated further. In the end, I wrote my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument.

    The key point is that when you write

        LASTDATE( table[column] )

    in reality you obtain something like

        LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) )

    which converts an existing row context into a filter context. Thus, if you have something like

        FILTER( table, table[column] = LASTDATE( table[column] ) )

    the FILTER will return all the rows of table, whereas you probably want to use

        FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) )

    so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE hiding the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here.

    In these days I'm speaking at SQLRally Nordic 2012 in Copenhagen and I will be in Cologne (Germany) next week for an SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available and the Amsterdam one is still in early bird discount until October 3rd! Then, in November I expect to meet many blog readers at PASS Summit 2012 in Seattle and I hope to find the time to write other articles on interesting things about Tabular and PowerPivot. Stay tuned!

    Read the article

  • Server outputs the sourcecode of PHP page

    - by Akhilesh B Chandran
    I have a Shared Hosting package with HostGator. On it, I'm hosting around 4 websites. They are just some simple sites that aren't likely to attract many visitors. But a few days ago, when I accessed one of my sites (via a browser), it outputted the PHP code of index.php instead of outputting it as HTML. I think, at that time, the server was a bit busy or something. I heard that Facebook also once had a similar condition where the home page's code was made available.

    So, how do I take preventive measures against this? I always use phpBB forum's style of coding. That is, sub pages, common functions, etc. are separated into subfolders, and in PHP I would just include_once() or require_once() them. Also, these subfolders have a .htaccess file in which I deny access from outside to the files inside. Also, in the main (index) page I define a constant, and the first thing each sub page (which is situated in a separate folder) does is check whether this constant is set; if not, it calls die().

    I am looking forward to solutions to this problem of outputting raw code when the page is accessed. Thanks in advance :)

    Read the article

  • Fun Upgrading to .Net 4.0

    - by Sam Abraham
    We are currently in the process of upgrading one of our applications to .Net 4.0. Aside from us geeks wanting to always use the latest and greatest technologies, an immediate business need for Silverlight 4.0 features justified our upgrade endeavor. The following is a summary of some issues we ran into with our web project:

    For security purposes, the IIS 7 .Net 4.0 ISAPI filter is disabled. "Allow" it from the ISAPI and CGI Restrictions screen as shown:

        Figure 1 - Allowing ASP.Net 4.0 ISAPI Filter

    By default the Web Setup Project only requires the .Net Framework 4 Client Profile to be installed on the target system, which offers a lighter-weight install for client machines consuming .Net 4.0 applications. However, using certain .Net 4.0 features requires the full .Net 4.0 Framework, as outlined in this link: http://msdn.microsoft.com/en-us/library/cc656912.aspx. We hence needed to update the installer to require the complete .Net 4.0 Framework on the target machine and to prompt for its installation if needed.

    To accomplish this goal, we updated the installer's launch conditions to check for .Net 4.0 as well as the installer prerequisites as shown:

        Figure 2 - Ensure Web Setup Project runs on full .Net 4.0 version
        Figure 3 - Launch Conditions screen
        Figure 4 - Set launch condition to .Net 4.0
        Figure 5 - Changing installer prerequisites
        Figure 6 - Changing installer prerequisites

    Read the article

  • Validating data to nest if or not within try and catch

    - by Skippy
    I am validating data; in this case I want one of three ints. I am asking this question because it is the fundamental principle I'm interested in. This is a basic example, but I am developing best practices now, so that when things become more complicated later I am better equipped to manage them.

    Is it preferable to have the try and catch followed by the condition:

        public static int getProcType()
        {
            try
            {
                procType = getIntInput("Enter procedure type -\n"
                        + " 1 for Exploratory,\n"
                        + " 2 for Reconstructive, \n"
                        + "3 for Follow up: \n");
            }
            catch (NumberFormatException ex)
            {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            if (procType == 1 || procType == 2 || procType == 3)
            {
                hrlyRate = hrlyRate(procType);
                procedure = procedure(procType);
            }
            else
            {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            return procType;
        }

    Or is it better to put the if within the try and catch?

        public static int getProcType()
        {
            try
            {
                procType = getIntInput("Enter procedure type -\n"
                        + " 1 for Exploratory,\n"
                        + " 2 for Reconstructive, \n"
                        + "3 for Follow up: \n");
                if (procType == 1 || procType == 2 || procType == 3)
                {
                    hrlyRate = hrlyRate(procType);
                    procedure = procedure(procType);
                }
                else
                {
                    System.out.println("Error! Enter a valid option!");
                    getProcType();
                }
            }
            catch (NumberFormatException ex)
            {
                System.out.println("Error! Enter a valid option!");
                getProcType();
            }
            return procType;
        }

    I am thinking the if within the try may be quicker, but also may be clumsy. Which would be better, as my programming becomes more advanced?
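    For comparison, here is a self-contained sketch of the same validation written as a loop instead of recursive retries. It is only an illustration of one common alternative, not code from the original question, and the prompt text and helper names are simplified assumptions.

        import java.util.Scanner;

        public class ProcTypePrompt {
            // Re-prompts until one of the three valid options is entered.
            // Keeping the parse and the range check inside the same try block
            // lets both failure modes fall through to the same retry path.
            static int getProcType(Scanner in) {
                while (true) {
                    System.out.print("Enter procedure type - 1 Exploratory, 2 Reconstructive, 3 Follow up: ");
                    try {
                        int procType = Integer.parseInt(in.nextLine().trim());
                        if (procType == 1 || procType == 2 || procType == 3) {
                            return procType;
                        }
                    } catch (NumberFormatException ex) {
                        // fall through to the error message below
                    }
                    System.out.println("Error! Enter a valid option!");
                }
            }

            public static void main(String[] args) {
                int procType = getProcType(new Scanner(System.in));
                System.out.println("You chose: " + procType);
            }
        }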

    Read the article

  • Pagination for product listing, what to use? "canonical" or "rel-prev-next" or do nothing?

    - by Jayapal Chandran
    I want to make sure about my product listing, which shows 10 products per page that are not in a series (link). That page explains how to use canonical or rel-prev-next for pagination when a long page has been divided into multiple pages and those pages become a series, whereas my condition is not that: they are unique listings which are not related to each other, and all the listing links lead to a product profile page.

    So let's say my site is all about cars and I have a Used Audi page with 1000 Audis for sale. There are 10 used Audi cars on each page, so there are 100 pages in the series. If I start to utilise rel="prev" and rel="next", should I set page 2 onwards as index,follow or noindex,follow? The content on page 2 all the way to 100 only changes ever so slightly, as different cars will be for sale on different pages, but from a "Panda" point of view the pages are incredibly similar, as they'd hold the same meta data as page 1 in the series along with duplicate reviews and news etc. I want page 1 in the series to be the main page for Google to send users to, and I don't see the point in Google indexing pages 2 to 100. What's everyone's view on this?

    Lastly, with the rel="canonical" tag, should pages 2 to 100 all point back to page 1 in the series or to the individual page itself? E.g.: /used-audi/page-3/.

    Read the article

  • Efficiently separating Read/Compute/Write steps for concurrent processing of entities in Entity/Component systems

    - by TravisG
    Setup

    I have an entity-component architecture where Entities can have a set of attributes (which are pure data with no behavior) and there exist systems that run the entity logic which act on that data. Essentially, in somewhat pseudo-code:

        Entity
        {
            id;
            map<id_type, Attribute> attributes;
        }

        System
        {
            update();
            vector<Entity> entities;
        }

    A system that just moves along all entities at a constant rate might be:

        MovementSystem extends System
        {
            update()
            {
                for each entity in entities
                    position = entity.attributes["position"];
                    position += vec3(1,1,1);
            }
        }

    Essentially, I'm trying to parallelise update() as efficiently as possible. This can be done by running entire systems in parallel, or by giving each update() of one system a couple of components so different threads can execute the update of the same system, but for a different subset of entities registered with that system.

    Problem

    In reality, these systems sometimes require that entities interact (read/write data from/to each other), sometimes within the same system (e.g. an AI system that reads state from other entities surrounding the currently processed entity), but sometimes between different systems that depend on each other (i.e. a movement system that requires data from a system that processes user input).

    Now, when trying to parallelize the update phases of entity/component systems, the phase in which data (components/attributes) from entities is read and used to compute something, and the phase where the modified data is written back to entities, need to be separated in order to avoid data races. Otherwise the only way (not taking into account just "critical section"ing everything) to avoid them is to serialize parts of the update process that depend on other parts. This seems ugly.

    To me it would seem more elegant to be able to (ideally) have all processing running in parallel, where a system may read data from all entities as it wishes, but doesn't write modifications back to that data until some later point. The fact that this is even possible is based on the assumption that modification write-backs are usually very small in complexity and don't require much performance, whereas computations are very expensive (relatively). So the overhead added by a delayed-write phase might be evened out by more efficient updating of entities (by having threads work a greater percentage of the time instead of waiting).

    A concrete example of this might be a system that updates physics. The system needs to both read and write a lot of data to and from entities. Optimally, there would be a system in place where all available threads update a subset of all entities registered with the physics system. In the case of the physics system this isn't trivially possible because of race conditions. So without a workaround, we would have to find other systems to run in parallel (which don't modify the same data as the physics system), otherwise the remaining threads are waiting and wasting time. However, that has disadvantages:

    - Practically, the L3 cache is pretty much always better utilized when updating a large system with multiple threads, as opposed to multiple systems at once, which all act on different sets of data.
    - Finding and assembling other systems to run in parallel can be extremely time consuming to design well enough to optimize performance.
    - Sometimes, it might not even be possible at all, because a system may just depend on data that is touched by all other systems.

    Solution?

    In my thinking, a possible solution would be a system where reading/updating and writing of data are separated, so that in one expensive phase systems only read data and compute what they need to compute, and then in a separate, performance-wise cheap, write phase, attributes of entities that needed to be modified are finally written back to the entities.

    The Question

    How might such a system be implemented to achieve optimal performance, as well as making the programmer's life easier? What are the implementation details of such a system, and what might have to be changed in the existing EC-architecture to accommodate this solution?
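    A minimal sketch of the read/compute/write split being asked about, written in Java: each entity keeps a published "current" value that the parallel compute phase may read freely, plus a staged "next" value that only its own task writes; a cheap single-threaded commit phase then publishes the staged values. The class names, the single position attribute and the thread pool are illustrative assumptions, not part of the original architecture.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        class Entity {
            volatile double x, y, z;      // current state, read during the compute phase
            double nextX, nextY, nextZ;   // staged state, applied during the commit phase
        }

        public class ReadComputeWrite {
            public static void main(String[] args) throws Exception {
                List<Entity> entities = new ArrayList<>();
                for (int i = 0; i < 1000; i++) entities.add(new Entity());

                ExecutorService pool = Executors.newFixedThreadPool(4);

                // Phase 1: parallel read + compute; results are staged per entity,
                // so no two tasks ever write to the same memory.
                List<Future<?>> tasks = new ArrayList<>();
                for (Entity e : entities) {
                    tasks.add(pool.submit(() -> {
                        // an expensive computation may read any entity's *current* state here
                        e.nextX = e.x + 1;
                        e.nextY = e.y + 1;
                        e.nextZ = e.z + 1;
                    }));
                }
                for (Future<?> t : tasks) t.get(); // barrier between the two phases

                // Phase 2: cheap write-back, publishes the staged values
                for (Entity e : entities) {
                    e.x = e.nextX;
                    e.y = e.nextY;
                    e.z = e.nextZ;
                }

                pool.shutdown();
            }
        }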

    Read the article

  • Parallelize incremental processing in Tabular #ssas #tabular

    - by Marco Russo (SQLBI)
    I recently ran into a problem trying to improve the parallelism of Tabular processing. As you know, multiple tables can be processed in parallel, whereas the processing of several partitions within the same table cannot be parallelized. When you perform an incremental update by adding only new rows to an existing table, what you really do is add rows to a partition, so adding rows to many tables means adding rows to several partitions. The particular condition you have in this case is that every partition in which you add rows belongs to a different table.

    Adding rows implies using the ProcessAdd command; its QueryBinding parameter specifies a SQL syntax to read new rows, otherwise the original query specified for the partition will be used, and it could generate duplicated data if you don't have a dynamic behavior on the SQL side. If you create the required XMLA code manually, you will find that the QueryBinding node that should be part of the ProcessAdd command has to be moved out of ProcessAdd in case you are using a Batch command with more than one Process command (which is the reason why you want to use a single batch: run multiple process operations in parallel!). If you use AMO (Analysis Management Objects) you will find that this combination is not supported; even if you don't get a syntax error compiling the code, you might obtain this error at execution time:

        The syntax for the 'Process' command is incorrect. The 'Bindings' keyword cannot appear under a 'Process' command if the 'Process' command is a part of a 'Batch' command and there are more than one 'Process' commands in the 'Batch' or the 'Batch' command contains any out of line related information. In this case, the 'Bindings' keyword should be a part of the 'Batch' command only.

    If this is happening to you, the best solution I've found is manipulating the XMLA code generated by AMO, moving the Binding nodes to the right place. A more detailed description of the issue and the code required to send a correct XMLA batch to Analysis Services is available in my article Parallelize ProcessAdd with AMO. By the way, the same technique (and code) can also be used if you have the same problem in a Multidimensional model.

    Read the article

  • PASS Summit 2013 Review

    - by Ajarn Mark Caldwell
    As a long-standing member of PASS who lives in the greater Seattle area and has attended about nine of these Summits, let me start out by saying how GREAT it was to go to Charlotte, North Carolina this year.  Many of the new folks that I met at the Summit this year, upon hearing that I was from Seattle, commented that I must have been disappointed to have to travel to the Summit this year after 5 years in a row in Seattle.  Well, nothing could be further from the truth.  I cheered loudly when I first heard that the 2013 Summit would be outside Seattle.  I have many fond memories of trips to Orlando, Florida and Grapevine, Texas for past Summits (missed out on Denver, unfortunately).  And there is a funny dynamic that takes place when the conference is local.  If you do as I have done the last several years and saved my company money by not getting a hotel, but rather just commuting from home, then both family and coworkers tend to act like you’re just on a normal schedule.  For example, I have a young family, and my wife and kids really wanted to still see me come home “after work”, but there are a whole lot of after-hours activities, social events, and great food to be enjoyed at the Summit each year.  Even more so if you really capitalize on the opportunities to meet face-to-face with people you either met at previous summits or have spoken to or heard of, from Twitter, blogs, and forums.  Then there is also the lovely commuting in Seattle traffic from neighboring cities rather than the convenience of just walking across the street from your hotel.  So I’m just saying, there are really nice aspects of having the conference 2500 miles away. Beyond that, the training was fantastic as usual.  The SQL Server community has many outstanding presenters and experts with deep knowledge of the tools who are extremely willing to share all of that with anyone who wants to listen.  The opening video with PASS President Bill Graziano in a NASCAR race turned dream sequence was very well done, and the keynotes, as usual, were great.  This year I was particularly impressed with how well attended were the Professional Development sessions.  Not too many years ago, those were very sparsely attended, but this year, the two that I attended were standing-room only, and these were not tiny rooms.  I would say this is a testament to both the maturity of the attendees realizing how important these topics are to career success, as well as to the ever-increasing skills of the presenters and the program committee for selecting speakers and topics that resonated with people.  If, as is usually the case, you were not able to get to every session that you wanted to because there were just too darn many good ones, I encourage you to get the recordings. Overall, it was a great time as these events always are.  It was wonderful to see old friends and make new ones, and the people of Charlotte did an awesome job hosting the event and letting their hospitality shine (extra kudos to SQLSentry for all they did with the shuttle, maps, and other event sponsorships).  We’re back in Seattle next year (it is a release year, after all) but I would say that with the success of this year’s event, I strongly encourage the Board and PASS HQ to firmly reestablish the location rotation schedule.  I’ll even go so far as to suggest standardizing on an alternating Seattle – Charlotte schedule, or something like that. If you missed the Summit this year, start saving now, and register early, so you can join us!

    Read the article

  • Breakout ball collision detection, bouncing against the walls

    - by Sri Harsha Chilakapati
    I'm currently trying to program a breakout game to distribute as an example game for my own game engine: http://game-engine-for-java.googlecode.com/

    But the problem here is that I can't get the bouncing condition working properly. Here's what I'm using:

        public void collision(GObject other){
            if (other instanceof Bat || other instanceof Block){
                bounce();
            } else if (other instanceof Stone){
                other.destroy();
                bounce();
            }
            //Breakout.HIT.play();
        }

    And here's my bounce() method:

        public void bounce(){
            boolean left = false;
            boolean right = false;
            boolean up = false;
            boolean down = false;
            if (dx < 0) {
                left = true;
            } else if (dx > 0) {
                right = true;
            }
            if (dy < 0) {
                up = true;
            } else if (dy > 0) {
                down = true;
            }
            if (left && up) {
                dx = -dx;
            }
            if (left && down) {
                dy = -dy;
            }
            if (right && up) {
                dx = -dx;
            }
            if (right && down) {
                dy = -dy;
            }
        }

    The ball bounces off the bat and blocks, but when the block is on top of the ball, it won't bounce and instead moves upwards out of the game. What am I missing? Is there anything else to implement? Please help me. Thanks
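    One common way to handle this, sketched below as an assumption rather than engine code: decide which face was hit from the overlap of the two bounding boxes instead of from the direction of travel alone, and reflect only that axis. The getX/getY/getWidth/getHeight accessors are hypothetical and not part of the engine's API.

        // Hypothetical sketch: reflects the velocity axis with the smallest penetration.
        // Assumes simple axis-aligned bounding-box accessors on both objects.
        public void bounceOff(GObject other) {
            float overlapX = Math.min(getX() + getWidth(),  other.getX() + other.getWidth())
                           - Math.max(getX(),               other.getX());
            float overlapY = Math.min(getY() + getHeight(), other.getY() + other.getHeight())
                           - Math.max(getY(),               other.getY());

            if (overlapX < overlapY) {
                dx = -dx;   // hit a left/right face
            } else {
                dy = -dy;   // hit a top/bottom face (e.g. a block directly above the ball)
            }
        }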

    Read the article

  • Assigning valid moves on board game

    - by Kunal4536
    I am making a board game in Unity 4.3 2D, similar to checkers. I have added an empty object at every point where a player can move and added a box collider to each empty object. I attached a click-to-move script to each player token. Now I want to assign valid moves, e.g. as shown in the picture: players can only move on the vertices of each square, and a player can only move to an adjacent vertex. Thus it can only move from the red spot to the yellow one and cannot move to the blue spot. There is another condition: if the token of another player is at the yellow spot, then the player cannot move to that spot; instead it will have to go from the red spot to the green spot.

    How can I find the valid moves of the player by scripting?

    I have another problem with click to move. When I click, all the objects move to that position, but I only want to move a single token. So what can I add to the script to select a specific object and then click to move only that object? Here is my script for click to move:

        var obj:Transform;
        private var hitPoint : Vector3;
        private var move: boolean = false;
        private var startTime:float;
        var speed = 1;

        function Update () {
            if(Input.GetKeyDown(KeyCode.Mouse0))
            {
                var hit : RaycastHit; // no point storing this really
                var ray = Camera.main.ScreenPointToRay (Input.mousePosition);
                if (Physics.Raycast (ray, hit, 10000))
                {
                    hitPoint = hit.point;
                    move = true;
                    startTime = Time.time;
                }
            }
            if(move)
            {
                obj.position = Vector3.Lerp(obj.position, hitPoint, Time.deltaTime * speed);
                if(obj.position == hitPoint)
                {
                    move = false;
                }
            }
        }
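    For the valid-move question, here is a board-agnostic sketch in Java (the original script is UnityScript); the integer grid coordinates, the orthogonal-adjacency rule and the occupancy array are assumptions for illustration only, not part of the Unity scene described above.

        import java.util.ArrayList;
        import java.util.List;

        public class BoardMoves {
            // Vertices are addressed by integer grid coordinates (col, row).
            record Vertex(int col, int row) {}

            // Returns the orthogonally adjacent vertices that exist on the board
            // and are not already occupied by another token.
            static List<Vertex> validMoves(Vertex from, int cols, int rows, boolean[][] occupied) {
                int[][] steps = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
                List<Vertex> moves = new ArrayList<>();
                for (int[] s : steps) {
                    int c = from.col() + s[0];
                    int r = from.row() + s[1];
                    boolean onBoard = c >= 0 && c < cols && r >= 0 && r < rows;
                    if (onBoard && !occupied[c][r]) {
                        moves.add(new Vertex(c, r));
                    }
                }
                return moves;
            }

            public static void main(String[] args) {
                boolean[][] occupied = new boolean[5][5];
                occupied[2][1] = true; // another player's token blocks this vertex
                System.out.println(validMoves(new Vertex(1, 1), 5, 5, occupied));
            }
        }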

    Read the article

  • Windows 8 BIOS - Boot Ubuntu from External HDD

    - by F3AR3DLEGEND
    My laptop came pre-loaded with Windows 8 64-bit (the only storage device is a 128 GB SSD). Since it is my school laptop, and I've heard that creating a Linux partition alongside Windows 8 is not very wise, I installed Ubuntu onto my external hard drive. I have a 500GB external HDD with the following partitions:

        Main Partition - NTFS - ~400 GB
        Extended Partition:
            /     - ext2 - ~25gb
            /home - ext2 - ~30gb
            swap  - ext2 - 10gb
            /boot - ?    - 10gb
        (? = not sure of the partition type)

    Using the PenDriveLinux installer, I created a LiveUSB version of Ubuntu 12.04 (LTS) on a 4GB USB drive. Using that, I installed Ubuntu onto the external hard drive, without any errors (or at least none that I was notified of). Using the BIOS settings, I changed the OS-loading order to the following:

        1. My External USB HDD
        2. Windows Boot Loader
        3. Some other things

    Therefore, Ubuntu should load from my hard drive first, but it doesn't. Also, my hard drive is in working condition, and it turns on when the BIOS starts (there is a light indicator). When I start my laptop, it goes directly to Windows 8 (I have the fast startup setting disabled as well). So, is there any way for me to set it up so that when my HDD is connected, it will automatically load Ubuntu? Thanks in advance!

    Read the article

  • How to Generate a Create Table DDL Script Along With Its Related Tables

    - by Compudicted
    Have you ever wondered, when creating table diagrams in SQL Server Management Studio (SSMS), how slickly you can add related tables to a diagram by just right-clicking on the interesting table name? Have you also ever needed to script those related tables, including the master one? And then discovered you have dozens of related tables? Or maybe had no SSMS at your disposal?

    That was me one day. Well, creativity to the rescue! I Binged and Googled around until I found more or less what I wanted, but it all involved T-SQL, yeah, long and convoluted CROSS APPLYs. Then I saw a PowerShell solution that I quickly adapted to my needs (I am not referencing any particular author because it was a mashup):

        ###########################################################################################################
        # Created by: Arthur Zubarev on Oct 14, 2012                                                              #
        # Synopsis: Generate file containing the root table CREATE (DDL) script along with all its related tables #
        ###########################################################################################################

        [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') | out-null

        $RootTableName = "TableName" # The table name, no schema name needed

        $srv = new-Object Microsoft.SqlServer.Management.Smo.Server("TargetSQLServerName")
        $conContext = $srv.ConnectionContext
        $conContext.LoginSecure = $True
        # In case integrated security is not used, uncomment below
        #$conContext.Login = "sa"
        #$conContext.Password = "sapassword"
        $db = New-Object Microsoft.SqlServer.Management.Smo.Database
        $db = $srv.Databases.Item("TargetDatabase")

        $scrp = New-Object Microsoft.SqlServer.Management.Smo.Scripter($srv)
        $scrp.Options.NoFileGroup = $True
        $scrp.Options.AppendToFile = $False
        $scrp.Options.ClusteredIndexes = $False
        $scrp.Options.DriAll = $False
        $scrp.Options.ScriptDrops = $False
        $scrp.Options.IncludeHeaders = $True
        $scrp.Options.ToFileOnly = $True
        $scrp.Options.Indexes = $False
        $scrp.Options.WithDependencies = $True
        $scrp.Options.FileName = 'C:\TEMP\TargetFileName.SQL'

        $smoObjects = New-Object Microsoft.SqlServer.Management.Smo.UrnCollection
        Foreach ($tb in $db.Tables)
        {
            Write-Host -foregroundcolor yellow "Table name being processed" $tb.Name

            If ($tb.IsSystemObject -eq $FALSE -and $tb.Name -eq $RootTableName) # feel free to customize the selection condition
            {
                Write-Host -foregroundcolor magenta $tb.Name "table and its related tables added to be scripted."
                $smoObjects.Add($tb.Urn)
            }
        }

        # The actual act of scripting
        $sc = $scrp.Script($smoObjects)

        Write-host -foregroundcolor green $RootTableName "and its related tables have been scripted to the target file."

    Enjoy!

    Read the article

  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDF) to be released by Receivables Development. Generic Data Fixes (GDF) are created by development to fix data issues caused by bugs/issues in the application code. Other Generic Data Fix benefits/features include:

    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the issue/condition caused by a bug.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script.
    - The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.

    In Release 12, when creating a refund by either of the following methods:

    - applying a receipt to the Refund activity, which creates an Invoice in Payables, or
    - going directly into Payables to create a refund for an open Credit Memo in Receivables,

    if the Invoice in Payables that is associated to the refund is cancelled, the corresponding refund application or credit memo in Receivables is not properly re-instated. For the receipt application, it still remains applied to the Refund, whereas it should be automatically unapplied. For the credit memo, it stays closed instead of getting re-opened.

    Doc ID 761993.1 includes the patch to make sure this doesn't happen in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify 'refund' applications with this problem. Stay tuned for more GDF scripts coming soon...

    Read the article

  • Execution plan warnings–All that glitters is not gold

    - by Dave Ballantyne
    In a previous post, I showed you the new execution plan warnings related to implicit and explicit conversions. Pretty much as soon as I hit 'post', I noticed something rather odd happening. This statement:

        select top(10) SalesOrderHeader.SalesOrderID,
               SalesOrderNumber
        from Sales.SalesOrderHeader
        join Sales.SalesOrderDetail on SalesOrderHeader.SalesOrderID = SalesOrderDetail.SalesOrderID

    throws the "Type conversion may affect cardinality estimation" warning.

    I've done no such conversion in my statement, so why would that be? Well, SalesOrderNumber is a computed column, "(isnull(N'SO'+CONVERT([nvarchar](23),[SalesOrderID],0),N'*** ERROR ***'))", so that's where the conversion is.

    Wait!!! Am I saying that every type conversion will throw the warning? Thankfully, no. It only appears for columns that are used in predicates, even if the predicate/join condition is fine, and the column is indexed (and/or, presumably, has statistics).

    Hopefully this won't lead to too many wild goose chases, but it is definitely something to bear in mind. If you want to see this fixed, then upvote my connect item here.

    Read the article
