Search Results

Search found 30575 results on 1223 pages for 'number systems'.

Page 110 of 1223

  • Hibernate CreateSQL Query Problem

    - by Shaded
    Hello all, I'm trying to use Hibernate's built-in createSQLQuery method, but it seems it doesn't like the following query:

        List results = hibernateSession.createSQLQuery(
            "SELECT number, location FROM table WHERE other_number IN " +
            "(SELECT f.number FROM table2 AS f JOIN table3 AS g ON f.number = g.number " +
            "WHERE g.other_number = " + var + ") ORDER BY number")
            .addEntity(Table.class).list();

    I have a feeling it's the nested select statement, but I'm not sure. The inner select is used elsewhere in the code and it returns results fine. This is my mapping for the first table:

        <hibernate-mapping>
            <class name="org.efs.openreports.objects.Table" table="table">
                <id name="id" column="other_number" type="java.lang.Integer">
                    <generator class="native"/>
                </id>
                <property name="number" column="number" not-null="true" unique="true"/>
                <property name="location" column="location" not-null="true" unique="true"/>
            </class>
        </hibernate-mapping>

    And the .java:

        public class Table implements Serializable {
            private Integer id; // panel_facility
            private Integer number;
            private String location;

            public Table() {
            }

            public void setId(Integer id) { this.id = id; }
            public Integer getId() { return id; }
            public void setNumber(Integer number) { this.number = number; }
            public Integer number() { return number; }
            public String location() { return location; }
            public void setLocation(String location) { this.location = location; }
        }

    Any suggestions?

    Edit: added the mapping.
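    A side note while we're here: concatenating var into the SQL string is fragile (and an injection risk). Below is a minimal sketch of the same query with a bound parameter instead, assuming the standard Hibernate Session/SQLQuery API; the wrapper class and method names are hypothetical:

        import java.util.List;
        import org.hibernate.Session;

        public class TableLookup {
            // Sketch only: assumes the question's Session and mapped Table entity.
            @SuppressWarnings("unchecked")
            public List<Table> findByOtherNumber(Session hibernateSession, int var) {
                return hibernateSession.createSQLQuery(
                        "SELECT number, location FROM table WHERE other_number IN "
                      + "(SELECT f.number FROM table2 AS f JOIN table3 AS g "
                      + "ON f.number = g.number WHERE g.other_number = :otherNumber) "
                      + "ORDER BY number")
                    .addEntity(Table.class)
                    .setParameter("otherNumber", var)  // bound, not concatenated
                    .list();
            }
        }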

  • How to get number of attributes in a java class?

    - by llm
    I have a Java class containing all the columns of a database table as attributes (member variables), with corresponding getters and setters. I want a method in this class called getColumnCount() that returns the number of columns (i.e. the number of attributes in the class). How would I implement this without hardcoding the number? I am open to criticisms and suggestions on this in general. Thanks.
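    For what it's worth, here is a minimal sketch of one common approach using reflection, assuming every non-static field of the class corresponds to a column (class and field names here are illustrative):

        import java.lang.reflect.Field;
        import java.lang.reflect.Modifier;

        public class TableRow {
            private Integer id;
            private String name;
            private String address;

            // Counts instance fields via reflection instead of hardcoding.
            public int getColumnCount() {
                int count = 0;
                for (Field f : getClass().getDeclaredFields()) {
                    if (!Modifier.isStatic(f.getModifiers())) {
                        count++;
                    }
                }
                return count;
            }
        }

    Note that getDeclaredFields() only sees fields declared in the class itself, not inherited ones, and it will also count any non-column helper fields, which is one of the usual criticisms of this approach.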

  • php, js - echoing a number based on how many children there are?

    - by benbyford
    Creating a site with FrogCMS and using a lot of jQuery to build an all-in-one-screen site. I'm having trouble getting PHP to echo a number for the number of children a subject has in the database, which I can then use as a variable in JS. I've tried some things, but I'm not really a developer, so I'm having issues. Here is some code Frog supplies showing how to utilise articles, for example (and articles then have children in the database):

        <?php find('/articles/'); ?>
        <?php children(array('limit' => 1, 'order' => 'page.created_on DESC')); ?>

    But I'm not sure how to produce a discrete number for the number of children that I can use with jQuery. Thanks

  • Can a real number "cover" all integers within its range?

    - by macias
    Is there a guarantee that a real number (float, double, etc.) can "cover" all integers within its range? By cover I mean that for every integer within its range there is a real number such that this equality holds: real == int. Or, in another example: say I have the biggest real number which is smaller than a given integer. When I add "epsilon", will I get a number equal to the given integer, or bigger than it? (I know that among real numbers you should not write comparisons such as == for equality; I am simply asking for a better understanding of the subject, not for coding comparisons.)
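    A small illustrative sketch of where coverage breaks down for IEEE-754 doubles (53-bit significand), written here in Java:

        public class DoubleCoverage {
            public static void main(String[] args) {
                long limit = 1L << 53; // 2^53: below this, every integer is exactly representable

                System.out.println((double) (limit - 1) == (double) limit); // false: both exact, distinct
                System.out.println((double) limit == (double) (limit + 1)); // true: 2^53 + 1 rounds back to 2^53
            }
        }

    In other words, every integer with magnitude up to 2^53 has an exact double representation; beyond that, the gap between adjacent representable values exceeds 1 and some integers are missed.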

  • How to find the remainder of large number division in C++?

    - by Beelzeboul
    Hello, I have a question regarding modulus in C++. What I was trying to do was divide a very large number, let's say for example M % 2, where M = 54,302,495,302,423. However, when I go to compile, it says that the number is too 'long' for int. Then when I switch it to a double it repeats the same error message. Is there a way I can do this so that I get the remainder of this very large number, or possibly an even larger one? Thanks for your help, much appreciated.
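    The question is about C++, where the usual fix is a 64-bit type such as long long (e.g. 54302495302423LL); the same idea is sketched here in Java to match the other examples on this page, with arbitrary-precision arithmetic shown for even larger values:

        import java.math.BigInteger;

        public class LargeRemainder {
            public static void main(String[] args) {
                long m = 54_302_495_302_423L; // fits comfortably in a 64-bit integer
                System.out.println(m % 2);    // 1

                // Beyond 64 bits, arbitrary-precision arithmetic still works:
                BigInteger huge = new BigInteger("123456789012345678901234567890");
                System.out.println(huge.mod(BigInteger.valueOf(2))); // 0
            }
        }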

  • How can I add to List<? extends Number> data structures?

    - by kunjaan
    I have a List which is declared like this:

        List<? extends Number> foo3 = new ArrayList<Integer>();

    I tried to add 3 to foo3. However, I get an error message like this:

        The method add(capture#1-of ? extends Number) in the type List<capture#1-of ? extends Number> is not applicable for the arguments (ExtendsNumber)
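    A short sketch (variable names are illustrative) of why the compiler rejects this, and the two usual ways around it:

        import java.util.ArrayList;
        import java.util.List;

        public class WildcardDemo {
            public static void main(String[] args) {
                // "? extends Number" is a producer view: reads are safe, adds are not,
                // because the actual element type is unknown (it could be a List<Double>).
                List<? extends Number> foo3 = new ArrayList<Integer>();
                // foo3.add(3); // compile error, as in the question

                // Workaround 1: keep the concrete type if you need to add.
                List<Integer> ints = new ArrayList<Integer>();
                ints.add(3);

                // Workaround 2: a lower-bounded wildcard is a consumer view that accepts adds.
                List<? super Integer> sink = new ArrayList<Number>();
                sink.add(3);
            }
        }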

  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a "Hello World" example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle's DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then afterwards figure out how to tune the OSCH external table definition, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user.
    The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using a system-maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can't be allowed to happen.

    The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or when normal activity in the production database can be safely taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files, and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let's assume that the DBA told you that your maximum DOP was 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember, location files in OSCH are metadata lists of HDFS files and are created using OSCH's External Table tool. They also represent the workload size given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of that location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH's External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do; the other three will have nothing to do and will quietly exit. If you run with 9 location files, the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for a DOP of 8, using 8, 16, or 32 location files would be a good idea.
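    As a toy illustration of Rules 1 and 3 (a hypothetical helper, not part of OSCH), here is the arithmetic from this section in Java:

        public class DopPlanner {
            public static void main(String[] args) {
                // Max DOP = instances x CPUs per instance x cores per CPU
                int instances = 2, cpusPerInstance = 2, coresPerCpu = 32;
                System.out.println("max DOP = " + instances * cpusPerInstance * coresPerCpu); // 128

                // Location files: a small multiple of the DOP the DBA actually grants you
                int grantedDop = 8;
                for (int multiple = 1; multiple <= 4; multiple *= 2) {
                    System.out.println("candidate location file count: " + grantedDop * multiple);
                }
                // -> 8, 16, 32, matching the recommendation above
            }
        }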
    Determining the Number of HDFS Files

    Let's start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool's ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few.

    For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load, it will be forced to put the singleton 900GB file into one location file and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If however the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file).

    As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time.

    What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won't be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files, roughly the same size, derived from the DOP that you expect to use for loading.
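    As a footnote to Rule 4, here is a small sketch (hypothetical code, not OSCH's actual implementation) of the greedy strategy described above, applied to the skewed 900GB example:

        import java.util.Arrays;
        import java.util.Comparator;

        public class GreedyBalance {
            // Hand the largest remaining file to the least-loaded location file.
            static long[] balance(Long[] fileSizesGb, int locationFiles) {
                long[] load = new long[locationFiles];
                Arrays.sort(fileSizesGb, Comparator.reverseOrder());
                for (long size : fileSizesGb) {
                    int min = 0;
                    for (int i = 1; i < load.length; i++) {
                        if (load[i] < load[min]) min = i;
                    }
                    load[min] += size;
                }
                return load; // per-location-file workload
            }

            public static void main(String[] args) {
                // One 900GB file plus seven 100GB files, spread over 8 location files:
                Long[] skewed = {900L, 100L, 100L, 100L, 100L, 100L, 100L, 100L};
                System.out.println(Arrays.toString(balance(skewed, 8)));
                // -> [900, 100, 100, 100, 100, 100, 100, 100]: the 9-to-1 skew from the text
            }
        }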
    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That's not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.

  • Have you ever seen an install of IE 8 whose version number was still 6.0?

    - by Justin
    I was at a local university computer lab presenting a website I work on, and I discovered something that looked really unusual to me. Their machines had Internet Explorer 8 installed, but when you checked the version number (Help > About Internet Explorer) it was listed as 6.0. It also gave me an "Operation Aborted" error that is supposed to be gone in IE8. Has anyone else run across this situation?

  • Is the port number the same when connecting to git via the git+ssh protocol?

    - by Tomek
    Hi all. I was wondering: when connecting to a git repository, does the git+ssh protocol use the same port number as the git protocol? For example:

        git://example.com/git/helloworld
        git+ssh://git@example.com/git/helloworld

    I am trying to push to a remote repository that has port forwarding set up only for the git protocol port (9418), using EGit. When I try to use git+ssh, EGit tells me:

        git+ssh://.... connection is closed by foreign host

    Thanks, Tomek

  • How do I activate my gizmo5 phone number in Google Voice? [closed]

    - by Sorin Sbarnea
    I wasn't able to activate my Gizmo5 number, because Google Voice activation (verification) requires you to enter two dial tones (DTMF), and they did not work, at least not with these variants:

    - Using the Gizmo5 PC client
    - Using Fring on the iPhone as a Gizmo5 SIP client
    - Redirecting Gizmo5 to a US mobile number

    None of the above methods worked for me. Any ideas? More info:

        http://www.google.com/support/forum/p/voice/thread?tid=1d8c1d99721e3509&hl=en
        http://googlevoices.blogspot.com/2009/04/forwarding-sip-calls-to-google-voice.html

  • Windows Server 2003: Is there a limit on the number of TCP connections per process?

    - by aceinthehole
    We are running into issues with a BizTalk host instance intermittently going down. One of the things we are worried about is the number of FTP connections a single host instance is making, which could easily reach into the hundreds, perhaps sometimes thousands, depending on traffic. My question is: in Windows Server 2003, is there a limit on the number of TCP connections per process? If so, would putting each application in its own host instance potentially solve the problem?

  • High number of ethernet errors. Tool for testing the ethernet card?

    - by Fabio Dalla Libera
    I have an Asus Sabertooth X79. I often get corrupted files. I checked the RAM, but memtest finds no errors. To avoid the possibility of disk errors, I tried copying the files to tmpfs. If I copy from the network, I get md5sum mismatches about once every 10 times using a 6GB file. Copying from RAM to RAM, I didn't get mismatches. I get a very high number of errors in ifconfig (compared to other PCs I just took as reference, which have 0 with much more traffic). Here is an example:

        RX packets:13972848 errors:200 dropped:0 overruns:0 frame:101

    The motherboard is new, but do you think there are some problems with it? What could I use to test the (integrated) network adapter? What else do you think I should double-check?

    Edit: I tried another NIC; it gives a lot of "Corrupted MAC on input. Disconnecting: Packet corrupt" lost connections. I noticed that another PC downloads at 11.1 MB/s without problems; this PC downloads at 66.0 MB/s. Is there any way to try to limit the speed?

  • What can you do to decrease the number of live issues with applications?

    - by User Smith
    First off, I have seen this post, which is slightly similar to my question: What can you do to decrease the number of deployment bugs of a live website?

    Let me lay out the situation for you. The team of programmers I belong to has metrics associated with our code. Over the last several months, our errors in our live system have increased by a large amount. We require that our updates to applications be tested by at least one other programmer prior to going live. I am personally completely against this, as I think applications should be tested by end users: end users are much better testers than programmers. I am not against programmers testing (obviously programmers need to test code), but most of the time they are too close to the code. The reason I think end users should test in our scenario is that we don't have business analysts, we just have programmers. I come from a background where BAs took care of all the testing once programmers checked off that it was ready to go live.

    We do have a staging environment in place that is a clone of the live environment, which we use to ensure that we don't have issues between the development and live environments; this does catch some bugs. We don't really do end user testing at all. I should say we don't really have anyone testing our code except programmers, which I think gets us into this mess (ideally, we would have BAs, QA, or professional testers test). We don't have a QA team or anything of that nature. We don't have fully laid out test cases for our projects.

    OK, I am just a peon programmer at the bottom of the rung, but I am probably more tired of these issues than the managers complaining about them. So I don't have the ability to tell them "you are doing it all wrong", though I have tried gentle pushes in the correct direction. Any advice or suggestions on how to alleviate this issue are greatly appreciated. Thanks.

  • How can I judge the suitability of modern processors for systems with specific CPU requirements?

    - by Iszi
    Inspired by this question: How do I calculate clock speed in multi-core processors?

    The answers in the above question do a fair job of explaining why a lower-speed multi-core processor won't necessarily perform at the same level as a higher-speed single-core processor. Example: 4*2=8, but a quad-core 2 GHz processor isn't necessarily as fast as a single-core 8 GHz processor. However, I'm having a hard time putting the information in those answers to practical use in my mind. In particular, I want to know how it should be used to judge whether a given CPU is appropriate for an application with specific requirements.

    Example scenarios:

    - An application has a minimum CPU requirement of 2.4 GHz dual-core.
    - Another application has a minimum CPU requirement of 1.8 GHz single-core.

    For either of the above scenarios: would a higher-speed processor with fewer cores, or a lower-speed processor with more cores, be equally sufficient? If so, how can we determine the appropriate processor speeds required for a given number of cores?

  • Should the number of developers be considered when estimating a task?

    - by Ludwig Magnusson
    I am pretty inexperienced with working on agile projects, but I have tried it a few times, and I always run into this problem when estimating a task: do we bring into the estimate the number of developers that will work on the task?

    Let me explain: task A is estimated at one time unit, and developer 1 will work on it. Task B is also estimated at one time unit, and developers 2 and 3 will work on it together. I.e., if developer 1 begins work on task A at the same time developers 2 and 3 begin task B, they will all finish at the same time according to the estimate. Should the estimate for task B be twice that of task A, or the same?

    The problem as I see it is that when a task is received and estimated, it is not always possible to know how many people will work on it. And if you assumed that two developers would work on the task for one time unit, but it turns out that only one developer will actually do it, this will not automatically mean that that developer will work on it for two time units. Is there any standard practice for this?

  • What is a 'good number' of exceptions to implement for my library?

    - by Fuzz
    I've always wondered how many different exception classes I should implement and throw for various pieces of my software. My particular development is usually C++/C#/Java related, but I believe this is a question for all languages. I want to understand what a good number of different exceptions to throw is, and what the developer community expects of a good library.

    The trade-offs I see include:

    - More exception classes can allow very fine-grained error handling for API users (prone to user configuration or data errors, or files not being found)
    - More exception classes allow error-specific information to be embedded in the exception, rather than just a string message or error code
    - More exception classes can mean more code maintenance
    - More exception classes can mean the API is less approachable to users

    The scenarios I wish to understand exception usage in include:

    - During a 'configuration' stage, which might include loading files or setting parameters
    - During an 'operation' type phase where the library might be running tasks and doing some work, perhaps in another thread

    Other patterns of error reporting without exceptions, or with fewer exceptions (as a comparison), might include:

    - Fewer exceptions, but embedding an error code that can be used as a lookup
    - Returning error codes and flags directly from functions (sometimes not possible from threads)
    - Implementing an event or callback system upon error (avoids stack unwinding)

    As developers, what do you prefer to see? If there are MANY exceptions, do you bother handling them separately anyway? Do you have a preference for error handling types depending on the stage of operation?
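    As a concrete (purely illustrative) middle ground in Java: a shallow hierarchy split by phase, with an embedded error code for fine-grained lookup, rather than one class per failure mode:

        public class LibraryException extends Exception {
            public enum ErrorCode { CONFIG_FILE_NOT_FOUND, BAD_PARAMETER, TASK_FAILED }

            private final ErrorCode code;

            public LibraryException(ErrorCode code, String message, Throwable cause) {
                super(message, cause);
                this.code = code; // error-specific detail travels with the exception
            }

            public ErrorCode getCode() { return code; }
        }

        // One subclass per phase keeps catch blocks coarse but meaningful:
        class ConfigurationException extends LibraryException {
            ConfigurationException(ErrorCode code, String message, Throwable cause) {
                super(code, message, cause);
            }
        }

        class OperationException extends LibraryException {
            OperationException(ErrorCode code, String message, Throwable cause) {
                super(code, message, cause);
            }
        }

    Callers who care can switch on getCode(); callers who don't can just catch the phase-level class.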

  • What are the essential considerations for setting up systems in a location with unreliable power?

    - by dunxd
    I deal with a lot of remote offices located in parts of the world where the local grid power supply is unreliable:

    - Power can go off at any time with no warning, with outages ranging from minutes to days
    - Power fluctuation is wild, with spikes and brown-outs

    Currently the offices will have some or all of the following:

    - A generator, with an inverter or some sort of manual switch
    - A big UPS or battery array connecting a number of devices
    - Several smaller APC UPSes with computers attached
    - Low-cost voltage regulators, sometimes connected between mains and UPS or device

    I know that each of these things needs to be appropriately rated for the equipment to which it is connected (although I am not sure how to calculate the correct rating). The offices will generally have the following equipment (in varying quantities):

    - some sort of internet connection device (VSAT router, ADSL modem, WiMax router)
    - Cisco ASA 5505 firewall
    - a bunch of PCs
    - printers
    - one server

    I don't seek to replace the advice of an electrician, but in some of these locations they only answer the questions you ask them, so I need to make sure I have enough understanding of the essentials to protect equipment from damage, and possibly get through some power cuts.

  • Very basic beginner Ruby question to do with elsif and ranges [migrated]

    - by MattKneale
    I've been trying to get to grips with Ruby (for all of an hour) and this is my first language. I've got the following code:

        var_comparison = 5

        print "Please enter a number: "
        my_num = Integer(gets.chomp)

        if my_num > var_comparison
          print "You picked a number greater than 5!"
        elsif my_num < var_comparison
          print "You picked a number less than 5!"
        elsif my_num > 99
          print "Your number is too large, man."
        else
          print "You picked the number 5!"
        end

    Clearly the interpreter has no way of distinguishing between accepting the rule 5 or 99. How do I make it so that any number between 6-99 returns "You picked a number greater than 5!", but a number 100 or greater returns "Your number is too large, man!"? Do I need to specifically state a range somehow? How would I best do that? Would it be the normal range methods, e.g. if my_num 6..99 or if my_num.between(6..99)?

  • What is the best practice for reading a large number of custom settings from a text file?

    - by jawilmont
    So I have been looking through some code I wrote a few years ago for an economic simulation program. Each simulation has a large number of settings that can be saved to a file and later loaded back into the program to re-run the same/similar simulation. Some of the settings are optional or depend on what is being simulated. The code to read back the parameters is basically one very large switch statement (with a few nested switch statements). I was wondering if there is a better way to handle this situation.

    One line of the settings file might look like this:

        #RA:1,MT:DiscriminatoryPriceKDoubleAuction,OF:Demo Output.csv,QM:100,NT:5000,KP:0.5 //continues...

    And some of the code that would read that line:

        switch( Character.toUpperCase( s.charAt(0) ) ) {
            case 'R':
                randSeed = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'M':
                marketType = s.substring(3).trim();
                System.err.println("MarketType: " + marketType);
                break;
            case 'O':
                outputFileName = s.substring(3).trim();
                break;
            case 'Q':
                quantityOfMarkets = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'N':
                maxTradesPerRound = Integer.valueOf( s.substring(3).trim() );
                break;
            case 'K':
                kParameter = Float.valueOf( s.substring(3).trim() );
                break;
            // continues...
        }
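    One alternative often suggested for this shape of problem (a sketch with made-up field names mirroring the question's): register one handler per key in a map, so adding a setting means adding one map entry rather than another case:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.function.Consumer;

        public class SettingsParser {
            private final Map<String, Consumer<String>> handlers = new HashMap<>();

            private int randSeed;
            private String marketType;
            private float kParameter;

            public SettingsParser() {
                handlers.put("RA", v -> randSeed = Integer.parseInt(v));
                handlers.put("MT", v -> marketType = v);
                handlers.put("KP", v -> kParameter = Float.parseFloat(v));
                // ... one entry per setting
            }

            // Parses a line like "#RA:1,MT:DiscriminatoryPriceKDoubleAuction,KP:0.5"
            public void parseLine(String line) {
                for (String token : line.substring(1).split(",")) {
                    String key = token.substring(0, token.indexOf(':'));
                    String value = token.substring(token.indexOf(':') + 1).trim();
                    handlers.getOrDefault(key, v -> {}).accept(value); // ignore unknown keys
                }
            }
        }

    java.util.Properties is another common answer when the keys can be full words, though it doesn't handle per-key type conversion by itself.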

  • Should I be worrying about limiting the number of textures in my game?

    - by Donutz
    I am working on a GUI in XNA 4.0 at the moment. (Before you point out that there are many GUIs already in existence: this is as much a learning exercise as a practical project.) As you may know, controls and dialogs in Windows actually consist of a number of system-level windows. For instance, a dialog box may consist of a window for the whole dialog, a child window for the client area, another window (barely showing) for the frame, and so on. This makes detecting mouse hits and doing clipping relatively easy. I'd like to design my XNA GUI along the same lines, but using overlapping Textures instead of windows, obviously. My question (yes, there's actually a question in this drivel) is: am I at risk of adversely affecting game performance and/or running low on resources if I get too nuts with creating many small textures? I haven't been able to find much information on how resource-tight the XNA environment actually is. I grew up in the days of 64K RAM, so I'm used to obsessing about resources and optimization. Anyway, any feedback appreciated.

  • How do I get any number of links to space evenly? [migrated]

    - by Aerodynamo
    Alright, so here is the situation... Say I have a navbar for a site, and I allow users to change the number of links they want on this navbar. This means they could have 3, 5, 10, etc. What I want to do is make it so that if one link is up, it only takes up, say, 1/5th of the space on the navbar. If I weren't using borders, I might do something like:

        width: 18%;
        padding: 0 1%;

    However, I have two problems with this:

    1) For 4 buttons, it's fine that it doesn't fill up the whole row; it would look ugly if the links were too wide. But when I have 6 or 7 buttons, I've got huge overflow!

    2) Since I have borders, I can't use a percentage value for the borders or the widths, because I can't properly estimate how much of the percentage they will take.

    Now, I know I don't have to use percentage values, but what I would ideally prefer is that the first button is the smallest size necessary for all the other buttons to fit properly, meaning that if I have 950px and 6 links, the first link can be about 150px while the others are 160px... that's fine. I want all the other buttons on the navbar to be equally sized, regardless of how many links there are. I also need it to accept a border... I figure the way to do this is to put the border in a nested div, so that it doesn't affect the overall width of the button? This is all well and good, but I'm still plagued by the issue of not being able to design a dynamic site using the style I want if I can't get all the nav buttons to fit the width properly. Are there some JS tricks I could use? I don't even know... Thanks
