Search Results

Search found 6172 results on 247 pages for 'limit choices to'.

Page 33 of 247

  • I have two choices of Master's classes this fall. Which is more useful?

    - by ahplummer
    (For background purposes and context): I am a Software Engineer, and currently manage other Software Engineers. I kind of wear two hats right now: one as a programmer, and one as a 'team lead'. In this regard, I've started going back to school to get my Master's degree with an emphasis in Computer Science. I already have a Bachelor's in Computer Science, and have been working in the field for about 13 years. Our primary development environment is Windows, writing in .NET, Delphi, and SQL Server.
    Choice #1: CST 798 DATA VISUALIZATION. Course description: basically, this is a course on the "Processing" language: http://processing.org/
    Choice #2: CST 711 INFORMATICS. Course description (from the catalog): Informatics is the science of the use and processing of data, information, and knowledge. This course covers a variety of applied issues from information technology, information management at a variety of levels, ranging from simple data entry, to the creation, design and implementation of new information systems, to the development of models. Topics include basic information representation, processing, searching, and organization; evaluation and analysis of information; Internet-based information access tools; and the ethics and economics of information sharing.

    Read the article

  • Is there any way to limit the turbo boost speed / intensity on an i7 laptop?

    - by Anonymous
    I've just got a used i7 laptop, one of those overheating Pavilions from HP with quad cores, and I really want to find a compromise between temperature and performance. If I run LINPACK or some other heavy benchmark, the temperature easily gets to 95+, and with a TJ of 100 degrees for the 2630QM model it throttles badly, which no cooling pad or even an industrial fan can solve. I figured out later that this is due to Turbo Boost: if I set my power settings to use 99% of the CPU instead of 100%, it seems to disable Turbo Boost, and the temperature gets better, but then again it loses quite a bit of performance. The regular clock is 2GHz, and with Turbo Boost it gets to 2.6GHz; if I could just limit it to around 2.3GHz, that would be really nice. There is also another question I've had a hard time getting an answer to. It seems to me that the clocks boost up to the maximum very quickly even when it is not needed: at 0% CPU load the clocks drop to 800MHz, but even at about 5% load they quickly jump to the maximum and even pop into turbo, which seems very strange to me. So I wonder if there is any way to adjust the sensitivity of the SpeedStep feature. I believe it would be more logical to demand an increased clock only when the load hits, let's say, 50%. I do understand that most of these features are probably hardwired somewhere in the CPU itself or the motherboard, which has no tuning options, just like on many laptops, but I would appreciate it if you could recommend something, or some software. Thanks

    Read the article

  • Is there a limit to how many sites can be hosted on a single IP address when using HTTP Host Headers on Windows 2008?

    - by Kev
    For reasons that are lost in the mists of time, our older Windows (2000, 2003) servers have been configured with an "Administrative" IP address and three further "Hosting" IP addresses. There are also additional IPs for sites with SSL certificates. The "Administrative" IP address is where all our internal provisioning, monitoring and other such apps are bound. We lock this down and don't permit access to it from the outside world (other than over our VPN). The three "Hosting" IP addresses are used for IIS website hosting (in conjunction with host headers). Historically, new site IP address allocations have been rotated through these three IP addresses; I'm not really sure why. I'm building a new batch of servers and I'm considering just having a single hosting IP address. Our servers can host up to 1200 sites on a single machine. Is there a technical limit to the number of IIS sites that can bind to a single IP address? Our Linux platform seems to do just fine with a single shared IP plus host headers. I initially thought this might be an SEO thing, but given that IPv4 address space conservation is paramount, I hardly think Google or other search engines could reasonably penalise site rankings just because hundreds of sites hang off the same IP.

    Read the article

  • What choices do I have for a future in software development?

    - by user354892
    After graduating from college with a degree in mathematics and a minor in computer science, I took over a year to find my first programming job. I very much enjoy my work environment, but sometimes I feel like I'm not developing professionally as quickly as I am capable of. My company does mostly web programming. The project that I work on is basically a front-end for a SQL database, and the entire project is (AFAIK) coded in VB.NET. I've been working on the back-end logic of the program; after some time on the job, we decided that designing web pages is not my cup of tea. It seems like the thing I spend more time on than anything else is searching the web and SO for information about poorly documented .NET and web APIs. This does not make me happy. Is this normal for a programming job? I don't think it is. It really hit me the other day when I saw a question about math asked here and I didn't know the answer; I feel like my knowledge is going stale! I was previously almost hired by another company where they design graphics-intensive battle simulations for the military, and I sometimes wonder what my life might be like if I had taken that job. I feel like my math and problem-solving skills might have been a better fit at that company. Within the wide field of software development, there are a number of directions in which to go, as evidenced by the huge variety of topics discussed here on StackOverflow. I would like to feel like I'm going in the direction where I will make the most of my skills and have a satisfying career. Let me word this question as clearly as possible: given the wide breadth of the field of software development, how does it break down into sub-fields, and what are the considerations for developing a software development career? I do not want to manage my career by default.

    Read the article

  • Intermittent 403 errors when using allow to limit access to a URL with both an explicit IP and SetEnvIf

    - by rbieber
    We are running Apache 2.2.22 in a Solaris 10 environment. We have a specific URL that we want to limit access to by IP. We recently implemented a CDN and now have the added complexity that the IPs that requests appear to be coming from are actually the CDN servers and not the ultimate end user. In case we need to back the CDN out, we want to handle both the case where the CDN is forwarding the request and the case where the ultimate client is sending the request directly. The CDN sends the end user's IP address in an HTTP header (for this scenario that header is called "User-IP"). Here is the configuration that we have put in place:
      SetEnvIf User-IP (\d+\.\d+\.\d+\.\d+) REAL_USER_IP=$1
      SetEnvIf REAL_USER_IP "(10\.1\.2\.3|192\.168\..+)" access_allowed=1
      <Location /uri/>
        Order deny,allow
        Allow from 10.1.2.3 192.168.
        allow from env=access_allowed
        Deny from all
      </Location>
    This seems to work fine for a time; however, at some point the web server starts serving 403 errors to the end user, so for some reason it is restricting access. The odd thing is that a bounce of the web server seems to resolve the issue, but only for a time, and then the behavior comes back. It might be worthwhile to note as well that this URL is delegated to a JBoss server via mod_jk. The denial of access is, however, confirmed to be at the Apache layer, and the issue only seems to happen after the server has been running for some time.

    Read the article

  • Multidimensional multiple-choice knapsack problem: find a feasible solution

    - by Onheiron
    My assignment is to use local search heuristics to solve the multidimensional multiple-choice knapsack problem, but to do so I first need to find a feasible solution to start with. Here is an example problem with what I have tried so far.
    Problem:
                    R1  R2  R3
      RESOURCES:     8   8   8
      GROUPS:
        G1:  11.0    3   2   2
             12.0    1   1   3
        G2:  20.0    1   1   3
              5.0    2   3   2
        G3:  10.0    2   2   3
             30.0    1   1   3
    Sorting strategies. To find a starting feasible solution for my local search I decided to ignore maximization of gains and just try to fit the resource requirements. I decided to sort the choices (strategies) in each group by comparing their "distance" from the origin of the multidimensional space, i.e. calculating SQRT(R1^2 + R2^2 + ... + RN^2). I felt this was a keen solution, as it somehow privileges choices whose resource usages are closer to each other (e.g. R1:2 R2:2 R3:2 < R1:1 R2:2 R3:3) even if the total sum is the same. Doing so and selecting the best choice from each group proved sufficient to find a feasible solution for many [30] different benchmark problems, but of course I knew it was just luck. So I came up with the problem presented above, which sorts like this:
                    R1  R2  R3
      RESOURCES:     8   8   8
      GROUPS:
        G1:  12.0    1   1   3   < select this
             11.0    3   2   2
        G2:  20.0    1   1   3   < select this
              5.0    2   3   2
        G3:  30.0    1   1   3   < select this
             10.0    2   2   3
    And it is not feasible, because the resource consumption is R1:3, R2:3, R3:9. The easy solution is to pick one of the second-best choices in group 1 or 2, so I'll need some kind of iteration (local search?) to find the starting feasible solution for my local search. Here are the options I came up with.
    Option 1: iterate choices. I tried to find a way to iterate all the choices in a specific order, something like
      G1 G2 G3
       1  1  1
       2  1  1
       1  2  1
       1  1  2
       2  2  1
      ...
    believing that feasible solutions won't be that far away from the infeasible one I start with, and thus the number of iterations will stay quite low. Does this make any sense? If yes, how can I iterate the choices (grouped combinations) of each group while keeping "as near as possible" to the previous iteration?
    Option 2: change the comparison term. I tried to think of a better variable to sort the choices on. I thought of a measure of how "precious" a resource is based on supply and demand, so that a higher demand for a more precious resource pushes a choice down the list, but this didn't help at all. Also, I suspect there isn't going to be any comparison variable that guarantees a feasible solution at the first strike. Is there such a variable? If not, is there a better sorting criterion anyway?
    Option 3: implement any known sub-optimal fast solving algorithm. Unfortunately I could not find any such algorithm online. Any suggestion?
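
    Not part of the original question, but to make the repair idea concrete, here is a minimal Python sketch (data layout and names are illustrative): it starts from the smallest-norm choice in every group (the sorting strategy above) and then repeatedly swaps one group's choice for whichever alternative most reduces the total overshoot of the limits:

      from math import sqrt

      LIMITS = [8, 8, 8]
      GROUPS = [  # each group: list of (gain, [R1, R2, R3]) choices
          [(11.0, [3, 2, 2]), (12.0, [1, 1, 3])],  # G1
          [(20.0, [1, 1, 3]), (5.0, [2, 3, 2])],   # G2
          [(10.0, [2, 2, 3]), (30.0, [1, 1, 3])],  # G3
      ]

      def overshoot(picks):
          """How far the current selection exceeds the limits (0 means feasible)."""
          totals = [0] * len(LIMITS)
          for group, choice in zip(GROUPS, picks):
              for r, v in enumerate(group[choice][1]):
                  totals[r] += v
          return sum(max(0, t - l) for t, l in zip(totals, LIMITS))

      def find_feasible():
          # Start with the smallest-norm choice in every group (the sorting idea).
          picks = [min(range(len(g)), key=lambda i: sqrt(sum(v * v for v in g[i][1])))
                   for g in GROUPS]
          while overshoot(picks) > 0:
              best = None  # (new overshoot, group index, choice index)
              for g, choices in enumerate(GROUPS):
                  for i in range(len(choices)):
                      if i == picks[g]:
                          continue
                      trial = picks[:g] + [i] + picks[g + 1:]
                      candidate = (overshoot(trial), g, i)
                      if best is None or candidate < best:
                          best = candidate
              if best is None or best[0] >= overshoot(picks):
                  return None  # no single swap reduces the overshoot: give up here
              picks[best[1]] = best[2]
          return picks

      print(find_feasible())  # [0, 0, 1] for the data above: usage R1:5, R2:4, R3:8

    For the data above this swaps in the second choice of G1, which is exactly the "easy solution" mentioned earlier, and the same move structure (change one group's choice at a time) can be reused by the actual local search afterwards.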

    Read the article

  • The Collatz Sequence problem

    - by Gandalf StormCrow
    I'm trying to solve this problem. It's not a homework question; it's just code I'm submitting to uva.onlinejudge.org so I can learn Java better through examples. Here is the problem's sample input:
      3 100
      34 100
      75 250
      27 2147483647
      101 304
      101 303
      -1 -1
    Here is the sample output:
      Case 1: A = 3, limit = 100, number of terms = 8
      Case 2: A = 34, limit = 100, number of terms = 14
      Case 3: A = 75, limit = 250, number of terms = 3
      Case 4: A = 27, limit = 2147483647, number of terms = 112
      Case 5: A = 101, limit = 304, number of terms = 26
      Case 6: A = 101, limit = 303, number of terms = 1
    The thing is, this has to execute within a 3-second time interval, otherwise the submission won't be accepted as a solution. Here is what I've come up with so far; it works as it should, but the execution time is not within 3 seconds:
      import java.util.Scanner;

      class Main {
          public static void main(String[] args) {
              Scanner stdin = new Scanner(System.in);
              int start;
              int limit;
              int terms;
              int a = 0;
              while (stdin.hasNext()) {
                  start = stdin.nextInt();
                  limit = stdin.nextInt();
                  if (start > 0) {
                      terms = getLength(start, limit);
                      a++;
                  } else {
                      break;
                  }
                  System.out.println("Case "+a+": A = "+start+", limit = "+limit+", number of terms = "+terms);
              }
          }

          public static int getLength(int x, int y) {
              int length = 1;
              while (x != 1) {
                  if (x <= y) {
                      if (x % 2 == 0) {
                          x = x / 2;
                          length++;
                      } else {
                          x = x * 3 + 1;
                          length++;
                      }
                  } else {
                      length--;
                      break;
                  }
              }
              return length;
          }
      }
    And here is how it's meant to be solved. An algorithm given by Lothar Collatz produces sequences of integers, and is described as follows:
      Step 1: Choose an arbitrary positive integer A as the first item in the sequence.
      Step 2: If A = 1 then stop.
      Step 3: If A is even, then replace A by A / 2 and go to step 2.
      Step 4: If A is odd, then replace A by 3 * A + 1 and go to step 2.
    My question is: how can I make it run inside the 3-second time interval?

    Read the article

  • Is it possible to limit outside connections to a subdomain with .htaccess or similar?

    - by digidave0205
    I host a web application. This application serves static HTML pages that are refreshed at various intervals, some as often as every 30 seconds. At this time I have about 300 unique pages that are accessed via 300 unique subdomains. Some clients have at most 50 visitors to their unique page and it refreshes every 30 seconds: no problem. Other clients have up to 1000 or more visitors to their page, and these clients are killing my server. There was no predefined limit upon signup, but now I have to impose such a limit to remain afloat financially. I would like to define a finite number of connections allowed for each individual subdomain in my hosting account; connection attempts beyond that limit would either be rejected or redirected. I have access to .htaccess and php.ini. Is something of this nature possible? Oh, and I have a dedicated/managed server at 1and1.

    Read the article

  • Most efficient way to LIMIT results in a JOIN?

    - by johnnietheblack
    I have a fairly simple one-to-many join in a MySQL query. In this case, I'd like to LIMIT my results by the left table. For example, let's say I have an accounts table and a comments table, and I'd like to pull 100 rows from accounts and all the associated comments rows for each. The only way I can think to do this is with a sub-select in the FROM clause instead of simply selecting FROM accounts. Here is my current idea:
      SELECT a.*, c.*
      FROM (SELECT * FROM accounts LIMIT 100) a
      LEFT JOIN `comments` c ON c.account_id = a.id
      ORDER BY a.id
    However, whenever I need to do a sub-select of some sort, my intermediate-level SQL knowledge feels like it's doing something wrong. Is there a more efficient, or faster, way to do this, or is this pretty good? By the way, this might be the absolute simplest way to do it, which I'm okay with as an answer. I'm simply trying to figure out whether there IS another way that could potentially compete with the above statement in terms of speed.
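
    To make the derived-table pattern concrete, here is a small self-contained sketch (illustrative only: it uses SQLite instead of MySQL, and the column names are made up) that runs the same shape of query and then folds the joined rows back into one list of comments per account:

      import sqlite3
      from itertools import groupby

      # Throwaway in-memory schema standing in for the real tables.
      db = sqlite3.connect(":memory:")
      db.executescript("""
          CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT);
          CREATE TABLE comments (id INTEGER PRIMARY KEY, account_id INTEGER, body TEXT);
          INSERT INTO accounts VALUES (1, 'alice'), (2, 'bob'), (3, 'carol');
          INSERT INTO comments VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 3, 'third');
      """)

      # Same shape as the query in the question: LIMIT the *left* table inside a
      # derived table, then join the comments onto it.  Note that without an
      # ORDER BY inside the derived table, "which 100 accounts" is undefined.
      rows = db.execute("""
          SELECT a.id, a.name, c.body
          FROM (SELECT * FROM accounts ORDER BY id LIMIT 100) a
          LEFT JOIN comments c ON c.account_id = a.id
          ORDER BY a.id
      """).fetchall()

      # One entry per account, each with its (possibly empty) list of comments.
      for account_id, grouped in groupby(rows, key=lambda r: r[0]):
          grouped = list(grouped)
          comments = [body for _, _, body in grouped if body is not None]
          print(account_id, grouped[0][1], comments)

    The derived table is what keeps the LIMIT applied to accounts rather than to the joined rows, which is the whole point of the query in the question.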

    Read the article

  • float** allocation limit + serialized struct problem. Need advice!

    - by jmgunn
    Basically, I'm getting an allocation limit error/warning when I create a float** array. The function I am calling to fill the float** retrieves data from a struct loaded from a file. The function works fine when I use one object, but when I load two objects into memory I get the limit error. I am pretty sure this has to do with byte alignment or something similar, because my struct is saved with a float** member, which I am sure you are not supposed to do!? Please confirm this! The next question I have is how to save/serialize the float** member of this struct. I can't really afford to put an upper bound on the array, i.e. "float [10000][3]", because I need/want to use this structure as a base for many other types of objects that may be well under the upper bound. Stroking my chin here! Any help/advice will receive my highest gratitude. BTW, these struct objects will be used in a game/graphics package; the float** is an array of float[3] for storing vertices in a model. Much thanks in advance.

    Read the article

  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple http connections to the server in serial, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds). What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as total number of concurrently connected http clients). The method I've come up with is to track http requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again. I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command), and checking the table size with the DBSIZE command. This worked but the performance was horrible. nginx and redis ended up using lots of cpu and the machine could only handle a very small number of concurrent requests. The new stick-table and tracking counters that were added to Haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, for nginx or haproxy or even for something else not mentioned here that I haven't thought of yet.
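
    To pin down the counting rule being described (remember each client IP for 20 seconds, and admit a request from an unknown IP only while the number of unexpired IPs is below the cap), here is a plain-Python sketch of the logic, independent of nginx, redis or haproxy; the 100-client cap is just an illustrative number:

      import time

      TTL = 20            # seconds an IP counts as "connected" after its last request
      MAX_CLIENTS = 100   # hypothetical cap on concurrent web clients

      last_seen = {}      # ip -> timestamp of its most recent allowed request

      def allow(ip, now=None):
          """Return True if this request should be let through."""
          now = time.time() if now is None else now
          # Expire entries that have not been refreshed within the TTL.
          for known_ip, stamp in list(last_seen.items()):
              if now - stamp > TTL:
                  del last_seen[known_ip]
          # Known (unexpired) IPs always pass; new IPs pass only under the cap.
          if ip in last_seen or len(last_seen) < MAX_CLIENTS:
              last_seen[ip] = now        # create or refresh the entry
              return True
          return False

    In haproxy terms the stick-table plays the role of last_seen, with the TTL handled by its expire parameter; the missing piece the question points at is a cheap equivalent of len(last_seen), i.e. a count of the unexpired entries.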

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues, and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though:
      130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok
      130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized
      localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok
    This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all, only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, and which looks quite similar to the example cupsd.conf. I could be wrong, but it looks like that final <Limit All> block ought to allow the actions the logs complain about.
      MaxLogSize 2000000000

      # Log general information in error_log - change "info" to "debug" for
      # troubleshooting...
      LogLevel info
      #AccessLog syslog
      #ErrorLog syslog
      #PageLog syslog

      # Administrator user group...
      SystemGroup sys root lp

      # Only listen for connections from the local machine.
      Listen 0.0.0.0:631
      Listen :::631
      Listen /var/run/cups/cups.sock
      ServerName <snipped>

      # Show shared printers on the local network.
      Browsing Off
      BrowseOrder allow,deny
      # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.)
      BrowseAllow @LOCAL

      # Default authentication type, when authentication is required...
      DefaultAuthType Basic

      # Restrict access to the server...
      <Location />
        Order allow,deny
        Allow all
      </Location>

      # Restrict access to the admin pages...
      <Location /admin>
        AuthType Default
        Require user @SYSTEM
        Encryption Required
        Order allow,deny
        Allow all
      </Location>

      # Restrict access to configuration files...
      <Location /admin/conf>
        AuthType Default
        Require user @SYSTEM
        Encryption Required
        Order allow,deny
        Allow all
      </Location>

      # Set the default printer/job policies...
      <Policy default>
        # Job-related operations must be done by the owner or an administrator...
        <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        # All administration operations require an administrator to authenticate...
        <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # All printer operations require a printer operator to authenticate...
        <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
          AuthType Default
          Require user @SYSTEM
          Order deny,allow
        </Limit>

        # Only the owner or an administrator can cancel or authenticate a job...
        <Limit Cancel-Job CUPS-Authenticate-Job>
          Require user @OWNER @SYSTEM
          Order deny,allow
        </Limit>

        <Limit All>
          Order allow,deny
        </Limit>
      </Policy>

    Read the article

  • Need a set-based solution to group rows

    - by KM
    I need to group a set of rows based on the Category column, and also limit the combined rows, based on the SUM(Number) column, to be less than or equal to the @Limit value. For each distinct Category value I need to identify "buckets" that are <= @Limit. If the SUM(Number) of all the rows for a Category value is <= @Limit, then there will be only one bucket for that Category value (like 'CCCC' in the sample data). However, if the SUM(Number) > @Limit, then there will be multiple bucket rows for that Category value (like 'AAAA' in the sample data), and each bucket must be <= @Limit. There can be as many buckets as necessary. Also, look at Category value 'DDDD': its one row is greater than @Limit all by itself, and gets split into two rows in the result set. Given this simplified data:
      DECLARE @Detail table (DetailID int primary key, Category char(4), Number int)
      SET NOCOUNT ON
      INSERT @Detail VALUES ( 1, 'AAAA',100)
      INSERT @Detail VALUES ( 2, 'AAAA', 50)
      INSERT @Detail VALUES ( 3, 'AAAA',300)
      INSERT @Detail VALUES ( 4, 'AAAA',200)
      INSERT @Detail VALUES ( 5, 'BBBB',500)
      INSERT @Detail VALUES ( 6, 'CCCC',200)
      INSERT @Detail VALUES ( 7, 'CCCC',100)
      INSERT @Detail VALUES ( 8, 'CCCC', 50)
      INSERT @Detail VALUES ( 9, 'DDDD',800)
      INSERT @Detail VALUES (10, 'EEEE',100)
      SET NOCOUNT OFF

      DECLARE @Limit int
      SET @Limit=500
    I need one of these result sets:
      DetailID Bucket |    DetailID Category Bucket
      -------- ------ |    -------- -------- ------
          1       1   |        1    'AAAA'     1
          2       1   |        2    'AAAA'     1
          3       1   |        3    'AAAA'     1
          4       2   |        4    'AAAA'     2
          5       3   OR       5    'BBBB'     1
          6       4   |        6    'CCCC'     1
          7       4   |        7    'CCCC'     1
          8       4   |        8    'CCCC'     1
          9       5   |        9    'DDDD'     1
          9       6   |        9    'DDDD'     2
         10       7   |       10    'EEEE'     1
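
    This is deliberately not the requested set-based T-SQL, just a small procedural sketch (assuming the rows are already ordered by Category and DetailID) to pin down the bucketing rule implied by the expected output: fill buckets greedily per Category, start a new bucket when a row would not fit, and let a single row larger than @Limit spill across extra buckets of its own:

      LIMIT = 500
      DETAIL = [  # (DetailID, Category, Number), ordered by Category, DetailID
          (1, 'AAAA', 100), (2, 'AAAA', 50), (3, 'AAAA', 300), (4, 'AAAA', 200),
          (5, 'BBBB', 500),
          (6, 'CCCC', 200), (7, 'CCCC', 100), (8, 'CCCC', 50),
          (9, 'DDDD', 800),
          (10, 'EEEE', 100),
      ]

      def buckets(rows, limit):
          out, bucket, remaining, prev_cat = [], 0, 0, None
          for detail_id, cat, number in rows:
              if cat != prev_cat or number > remaining:
                  bucket, remaining, prev_cat = bucket + 1, limit, cat
              out.append((detail_id, cat, bucket))
              take = min(number, remaining)
              remaining -= take
              leftover = number - take
              while leftover > 0:                       # a single row bigger than
                  bucket, remaining = bucket + 1, limit # @Limit spills into extra buckets
                  out.append((detail_id, cat, bucket))
                  take = min(leftover, remaining)
                  remaining -= take
                  leftover -= take
          return out

      for row in buckets(DETAIL, LIMIT):
          print(row)   # reproduces the first result set above (global bucket numbers)

    The Bucket values printed here are the global numbers of the first result set; restarting the counter whenever Category changes gives the per-category numbers of the second one.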

    Read the article

  • How to make this in Python

    - by user2980882
    The number reduction game. Rules of the game:
      • The first player to write a 0 wins.
      • To start the game, Player 1 picks any whole number greater than 1, say 18.
      • The players take turns reducing the number by either:
          o subtracting 1 from the number his/her opponent just wrote, OR
          o halving the number his/her opponent just wrote, rounding down if necessary.
    Write a Python program that lets two players play the number reduction game. Your program should:
      1. Ask Player 1 to enter the starting number.
      2. Use a while-loop to allow the players to take turns reducing the number until someone wins.
      3. Each time a player enters a positive number (not 0), inform the other player what his/her choices are and ask him/her to enter the next number.
      4. Declare the winner when someone enters 0.
    Example session:
      Player 1, enter a number greater than 1: 16
      Player 2, your choices are 15 or 8: 15
      Player 1, your choices are 14 or 7: 7
      Player 2, your choices are 6 or 3: 3
      Player 1, your choices are 2 or 1: 2
      Player 2, your choices are 1 or 1: 1
      Player 1, your choices are 0 or 0: 0
      Player 1 wins
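
    A minimal sketch of the requested game loop (assumptions: console input/output, and players always type one of the two offered numbers; input validation is left out):

      number = int(input("Player 1, enter a number greater than 1: "))
      player = 2                      # Player 1 just wrote the starting number
      while number != 0:
          minus_one, halved = number - 1, number // 2
          number = int(input(f"Player {player}, your choices are {minus_one} or {halved}: "))
          if number == 0:
              print(f"Player {player} wins")
          player = 1 if player == 2 else 2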

    Read the article

  • Can anyone tell me what exactly 32-bit/64-bit and x86 and AMD64 have to do with the different choices on the Ubuntu download site?

    - by Elysium
    Sorry for the newbie question, but I am still learning. I am wondering whether x86 simply refers to Intel CPUs and AMD64 simply refers to AMD CPUs, while 32-bit and 64-bit refer to the processor types? I was trying to download the Intel 64-bit version of the Ubuntu installer, but it won't give the option of x86 or AMD64; only 32-bit and 64-bit can be chosen. Does this mean that someone with an Intel CPU and another person with an AMD CPU will download the same install file? (Obviously this might differ depending on the bit version of their CPUs, but that's another thing.)

    Read the article

  • Is there a size limit to a text file?

    - by chicane
    Hi all, I am creating a log file for our website which will log every log-in by all the users to our orders area. I wish to know whether you think it is fine to keep this log info in just one single file, or whether it should be split up once the log file hits a certain size. My concern is that this file will get rather large over time, but I'm not certain what the size limit of a text file is. Thank you.
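
    The question doesn't say which language the site runs on; purely as an illustration of size-based splitting, this is what it looks like with Python's standard logging module, which rotates the file once it reaches a configured size (the file name and sizes here are hypothetical):

      import logging
      from logging.handlers import RotatingFileHandler

      # Rotate at roughly 5 MB and keep the 10 most recent files
      # (orders-login.log, orders-login.log.1, ..., orders-login.log.10).
      handler = RotatingFileHandler("orders-login.log", maxBytes=5 * 1024 * 1024,
                                    backupCount=10)
      handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))

      log = logging.getLogger("orders-login")
      log.setLevel(logging.INFO)
      log.addHandler(handler)

      log.info("user %s logged in to the orders area", "some_user")

    The same idea (cap one file, keep a fixed number of older ones) carries over to any stack, which sidesteps the "how big can a text file get" question entirely.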

    Read the article

  • Why doesn't nHibernate support LIMIT when doing "join fetch"?

    - by HeavyWave
    In nHibernate, if you do an HQL query with "join fetch" to eagerly load a child collection, nHibernate will ignore SetMaxResults and SetFirstResult and try to retrieve every single item from the database. Why? This behavior is specific to HQL, as ICriteria supports eager loading (with an outer join, though) and LIMIT. There are more details here http://www.lesnikowski.com/blog/index.php/nhibernate-ignores-setmaxresults-in-sql/ .

    Read the article

  • How can I limit a WordPress meta_box to a single page?

    - by farinspace
    I need a way to limit the meta box to a single page (ID=84) ... if I do the following it works, but the submitted data does not go through and is not saved ...
      add_action('admin_init','violin_init');

      function violin_init() {
          if ($_GET['post'] == '84') {
              wp_enqueue_style('violin_admin_css', VIOLIN_THEME_PATH . '/custom/meta.css');
              add_meta_box('violin_options_meta', 'Highlight Content', 'violin_options_meta', 'page', 'normal', 'high');
              add_action('save_post','violin_save_meta');
          }
      }

    Read the article

  • PHP memory limit problem while creating XML of Magento products

    - by Jitendra
    Hello masters, thanks in advance. I need help solving a PHP memory problem. I have created a PHP script that automatically fetches Magento product data. The problem is that when there is a large number of products in the database, the script throws a fatal memory error. I have changed the memory limit to 256M in my php.ini, but the script still does not finish executing. I have checked the script and it works fine when the number of products is small, but with a larger number it does not. Please help... Thanks, Jitendra Dhobi

    Read the article
