Search Results

Search found 6172 results on 247 pages for 'limit choices to'.


  • Currently using a view. Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether my table mapping_uGroups_uProducts, which is a view defined as follows, should be replaced by a hard table:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER
        VIEW `db`.`mapping_uGroups_uProducts` AS
        select distinct `X`.`upID` AS `upID`, `Z`.`ugID` AS `ugID`
        from ((`db`.`mapping_uProducts_Products` `X`
          join `db`.`productsInfo` `Y` on((`X`.`pID` = `Y`.`pID`)))
          join `db`.`mapping_uGroups_Groups` `Z` on((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo
          JOIN fs_uProducts USING (upID)
          JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we use hard table and index
          JOIN mapping_fs_key USING (fsKeyID)
        WHERE fsName="OVERALL"
          AND ugID=1
        ORDER BY score DESC
        LIMIT 0,30;

    which is pretty slow: about 10 seconds for 30 results. I think the reason my query is so slow is that it relies on a VIEW, which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script that just inserts everything from the view into a hard table (obviously losing the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?
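
    A possible middle ground, sketched below, is to keep the view's SELECT but materialize it into an indexed table on a schedule, or whenever the mappings change. This is only a sketch: it assumes PHP with mysqli, the table and index names are illustrative, and it assumes MySQL's RENAME TABLE (which also works on views) for the swap.

        <?php
        // Rebuild a hard table from the same SELECT the view uses, then
        // swap it in under the name the application queries.
        $db = new mysqli('localhost', 'user', 'pass', 'db');

        $db->query("DROP TABLE IF EXISTS mapping_hard_new");
        $db->query("CREATE TABLE mapping_hard_new (
                        upID INT NOT NULL,
                        ugID INT NOT NULL,
                        PRIMARY KEY (upID, ugID),
                        KEY idx_ugID (ugID)
                    )
                    SELECT DISTINCT X.upID, Z.ugID
                    FROM mapping_uProducts_Products X
                    JOIN productsInfo Y ON X.pID = Y.pID
                    JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID");

        // RENAME TABLE is atomic, so readers never see a half-built table.
        $db->query("RENAME TABLE mapping_uGroups_uProducts TO mapping_hard_old,
                                 mapping_hard_new TO mapping_uGroups_uProducts");
        $db->query("DROP TABLE mapping_hard_old");

    With (upID, ugID) indexed, the join no longer scans an unindexed derived table, which is what the <derived2> row in the EXPLAIN output is pointing at.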


  • Bash PATH: How long is too long?

    - by ajwood
    Hi, I'm currently designing a software quarantine pattern to use on Ubuntu. I'm not sure how standard "quarantine" is in this context, so here is what I hope to accomplish. Inside a particular quarantine is all of the stuff one needs to run an application (bin, share, lib, etc.). Ideally, the quarantine has no leaks, which means it's not relying on any code outside of itself on the system. A quarantine can be defined as a set of executables (and some environment settings needed to make them run).

    I think it will be beneficial to separate the built packages enough that upgrading to a newer version of the quarantine won't require rebuilding the whole thing. I'll be able to update just a few packages, and then the new quarantine can use some of the old parts and some of the new parts.

    One issue I'm wondering about is the environment variables I'll be setting up to use a particular quarantine. Is there a hard limit on how big PATH can be, either in number of characters or in the number of directories it contains? Might a path be so long that it affects performance?

    Thanks very much, Andrew

    p.s. Any other wisdom that might help my design would be greatly appreciated :)


  • WCF data services - Limiting related objects returned based on criteria

    - by Mike Morley
    I have an object graph consisting of a base employee object and a set of related message objects. I am able to return the employee objects based on search criteria on the employee properties (e.g. team). However, if I expand on the messages, I get the full collection of messages back. I would like to either take the top n messages (i.e. restrict to the 10 most recent) or, ideally, use a date range on the message objects to limit how many are brought back.

    So far I have not been able to figure out a way of doing this. I get an error if I attempt to filter on properties of the message (&$filter=employee/message/StartDate gives the error "No property 'StartDate' exists in type 'System.Data.Objects.DataClasses.EntityCollection`1'"). Attempting to use Top on the message related object doesn't work either.

    I have also tried using a WebGet extension that takes a string list of employee IDs. That works until the list gets too long, and then fails because the URL gets too long (it might be possible to set up a paging mechanism on this approach).

    Unfortunately the UI control I am using requires the data to be in a fairly specific hierarchical shape, so I can't easily come at this from the message side and work backwards. Outside of making multiple calls, does anyone know of a method to accomplish this with WCF Data Services? Thanks! M.


  • Website: VoteUp or VoteDown videos. How to restrict users from voting multiple times?

    - by DJDonaL3000
    I'm working on a website (HTML, CSS, JavaScript, Ajax, PHP, MySQL), and I want to restrict the number of times a particular user votes for a particular video. It's similar to the YouTube system where you can voteUp or voteDown a particular video. Each vote involves adding a row to the video.votes table, which logs the time, vote direction (up or down), the client IP address (using PHP: $ip = $_SERVER['REMOTE_ADDR'];), and of course the ID of the video in question. Adding votes is as simple as (pseudocode): JavaScript onClick calls vote(a,b,c,d), which passes the variables to a PHP insertion script via Ajax, and finally we replace the voting buttons with a "Thank You For Voting" message.

    THE PROBLEM: If you reload/refresh the page after voting, you can vote again, and again, and again; you get the point.

    MY QUESTION: How do you limit the number of times a particular user votes for a particular video?

    MY THOUGHTS: Do you use cookies, adding a new cookie with the ID of the video and checking for that cookie before you insert a new vote? Or, before you insert the vote, do you use the IP address and the video ID to see if this same user (IP) has voted for this same video in the past 24 hours (mktime), and either allow or disallow the vote insertion based on that query? Or do you just not care, and assume that most users are sane and have better things to do than refresh pages and vote repeatedly? Any suggestions or ideas welcome.
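
    The IP-plus-video check from the last paragraph could look something like the sketch below. Hedged: the table and column names (votes, video_id, direction, voted_at) are illustrative, and the old mysql_* API is used only to match the rest of the question.

        <?php
        // One vote per IP per video per 24 hours (illustrative schema).
        $ip  = mysql_real_escape_string($_SERVER['REMOTE_ADDR']);
        $vid = (int)$_POST['videoID'];
        $dir = ($_POST['dir'] === 'up') ? 1 : -1;

        $check = mysql_query(
            "SELECT 1 FROM votes
             WHERE video_id = $vid AND ip = '$ip'
               AND voted_at > DATE_SUB(NOW(), INTERVAL 24 HOUR)
             LIMIT 1");

        if (mysql_num_rows($check) == 0) {
            mysql_query(
                "INSERT INTO votes (video_id, ip, direction, voted_at)
                 VALUES ($vid, '$ip', $dir, NOW())");
            echo 'Thank You For Voting';
        } else {
            echo 'You have already voted for this video';
        }

    Note that IP checks punish users behind shared NATs; pairing the check with a cookie catches casual repeat votes without locking out whole offices.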


  • Incrementing a value by one over a lot of rows

    - by Andy Gee
    Edit: I think the answer to my question lies in the ability to set user-defined variables in MySQL through PHP; the answer by Multifarious has pointed me in this direction.

    Currently I have a script that cycles over 10M records, and it's very slow. It goes like this: I first get a block of 1000 results in an array similar to this:

        $matches[] = array('quality_rank'=>46732, 'db_id'=>5532);
        $matches[] = array('quality_rank'=>12324, 'db_id'=>1234);
        $matches[] = array('quality_rank'=>45235, 'db_id'=>8345);
        $matches[] = array('quality_rank'=>75543, 'db_id'=>2562);

    I then cycle through them one by one and update each record:

        $mult = count($matches)*2;
        foreach ($matches as $m) {
            $rank++;
            $score = (($m['quality_rank'] + $rank)/($mult))*100;
            $s = "UPDATE `packages_sorted`
                  SET `price_rank` = '".$rank."', `deal_score` = '".$score."'
                  WHERE `db_id` = '".$m['db_id']."' LIMIT 1";
        }

    This seems like a very slow way of doing it, but I can't find another way to increment the field price_rank by one each time. Can anyone suggest a better method?

    Note: Although I wouldn't usually store this kind of value in a database, I really do need it on this occasion, for comparison search queries later on in the project. Any help would be kindly appreciated :)
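
    Following the user-variable idea from the edit above, the whole ranking can be pushed into a single UPDATE. This is a hedged sketch: it assumes the rows are ranked in quality_rank order (adjust the ORDER BY to match however $matches was sorted), and it relies on MySQL evaluating the SET clauses left to right, which works in practice but is worth verifying on your version.

        <?php
        // Seed the counters once, then rank every row in one statement.
        mysql_query("SELECT @rank := 0, @mult := COUNT(*) * 2 FROM packages_sorted");
        mysql_query("UPDATE packages_sorted
                     SET price_rank = (@rank := @rank + 1),
                         deal_score = ((quality_rank + @rank) / @mult) * 100
                     ORDER BY quality_rank");

    This replaces millions of individual UPDATE round-trips with one server-side pass.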


  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service.

    Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients), concerns coming from previous experiences (not necessarily with the same platform).

    With a single database, connections to the DB would be opened at service startup and pooling used. But I'm not sure how connections could be managed in a multiple-DB scenario: Could a connection to the corresponding DB be opened and closed for each service request, and would performance be acceptable? If a connection is opened and maintained for each company accessing the system, what's the practical limit on open connections (to different databases)?

    It would be very interesting to hear your opinions and suggestions for this situation. Thanks


  • Why doesn't this PHP execute?

    - by cam
    I copied the code from this site exactly: http://davidwalsh.name/web-service-php-mysql-xml-json. It is as follows:

        /* require the user as the parameter */
        if(isset($_GET['user']) && intval($_GET['user'])) {

            /* soak in the passed variable or set our own */
            $number_of_posts = isset($_GET['num']) ? intval($_GET['num']) : 10; //10 is the default
            $format = strtolower($_GET['format']) == 'json' ? 'json' : 'xml'; //xml is the default
            $user_id = intval($_GET['user']); //no default

            /* connect to the db */
            $link = mysql_connect('localhost','username','password') or die('Cannot connect to the DB');
            mysql_select_db('db_name',$link) or die('Cannot select the DB');

            /* grab the posts from the db */
            $query = "SELECT post_title, guid FROM wp_posts WHERE post_author = $user_id AND post_status = 'publish' ORDER BY ID DESC LIMIT $number_of_posts";
            $result = mysql_query($query,$link) or die('Errant query: '.$query);

            /* create one master array of the records */
            $posts = array();
            if(mysql_num_rows($result)) {
                while($post = mysql_fetch_assoc($result)) {
                    $posts[] = array('post'=>$post);
                }
            }

            /* output in necessary format */
            if($format == 'json') {
                header('Content-type: application/json');
                echo json_encode(array('posts'=>$posts));
            }
            else {
                header('Content-type: text/xml');
                echo '<posts>';
                foreach($posts as $index => $post) {
                    if(is_array($post)) {
                        foreach($post as $key => $value) {
                            echo '<',$key,'>';
                            if(is_array($value)) {
                                foreach($value as $tag => $val) {
                                    echo '<',$tag,'>',htmlentities($val),'</',$tag,'>';
                                }
                            }
                            echo '</',$key,'>';
                        }
                    }
                }
                echo '</posts>';
            }

            /* disconnect from the db */
            @mysql_close($link);
        }

    And the PHP doesn't execute; it just displays as plain text. What's the dealio? The host supports PHP: I use it to run a WordPress blog and other things.
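
    One thing worth checking (an assumption, since the snippet is shown exactly as copied): PHP only executes what sits between <?php and ?> tags, in a file the server actually routes to PHP (normally a .php extension, requested over HTTP rather than opened from disk). A minimal probe:

        <?php
        // Save as test.php and request it through the web server.
        // Seeing this source in the browser instead of the version string
        // means the server is not executing PHP for this file at all.
        echo 'PHP works: ' . PHP_VERSION;
        ?>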


  • How to keep Windows from paging a block of memory

    - by photo_tom
    We are working on a Vista/Windows 7 application that will be running in 64-bit mode using VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB.

    For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this. I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that infer memory-mapped files work this way. Is there an expert who can clarify this for me?

    I'm aware there are other programs that we could adapt to do this, for example splitting the blobs and loading them into memcache or in-memory databases, but they all have too many problems with performance or code complexity. Suggestions?


  • Rapid taps on an OpenGL ES app introducing input delay

    - by Tim R.
    I am starting out writing a 2D game in OpenGL ES, and I have encountered an odd problem: if I rapidly tap the touchscreen, the input starts lagging behind the display. The more times I tap, the more delay between the input and any indication of that input onscreen. It only happens if I intentionally tap very rapidly, not from tapping and dragging with any number of fingers. What could be causing this?

    Excessive details follow:

    Both accelerometer input and taps are delayed by just tapping. The only events I am responding to are touchesBegan (below) in my EAGLView and accelerometer:(UIAccelerometer *)accelerometer didAccelerate:(UIAcceleration *)acceleration in my Game object. There doesn't seem to be any upper limit to the amount of delay: I've gotten up to 12 seconds of delay by tapping rapidly with five fingers. I have not seen any drops in framerate (it stays constant at 60 fps), either in the OpenGL ES tool in Instruments or by taking 1/(the time between updates).

    Possibly relevant code:

        - (void) drawView:(id) sender {
            [game update:allTouches];
            [renderer render:game];
        }

        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            allTouches = [event allTouches];
        }

    allTouches is a pointer that gets passed to my Game every update, which passes it to each GameObject in their update methods.


  • How do I set the LRECL in C#.NET?

    - by donde
    I have been trying to FTP a dtl file from .NET to what I believe is an AS/400. The error being reported back to me is "One or more lines have been truncated", and the admin is saying the file is coming over with 256 lines that have variable-length columns. I found this explanation online:

        we have to establish defaults because no specifics about the file exist.
        The default RECFM is V and LRECL is 256. This means that SAS will scan
        the input record looking for the CR & LF to tell us that we are at the
        EOR. If the marker isn't found within the limit of the LRECL, SAS
        discards the data from the LRECL value to the end of the record and
        adds a message to the Log that "One or more lines have been truncated".

    So I need to set the LRECL. How do I do this in C#/.NET?

        FtpWebRequest ftp = (FtpWebRequest)FtpWebRequest.Create(ftpfullpath);
        ftp.Credentials = new NetworkCredential(user, pwd);
        ftp.KeepAlive = false;
        ftp.UseBinary = false;
        ftp.Method = WebRequestMethods.Ftp.UploadFile;

        FileStream fs = File.OpenRead(inputfilepath + ftpfileName);
        byte[] buffer = new byte[fs.Length];
        fs.Read(buffer, 0, buffer.Length);
        fs.Close();

        Stream ftpstream = ftp.GetRequestStream();
        int i = 0;
        int intBlock = 1786;
        int intBuffLeft = buffer.Length;
        while (i < buffer.Length) {
            if (intBuffLeft >= 1786) {
                ftpstream.Write(buffer, i, intBlock);
            } else {
                ftpstream.Write(buffer, i, intBuffLeft);
            }
            i += intBlock;
            intBuffLeft -= 1786;
        }
        ftpstream.Close();


  • CodeIgniter Active Record and MySQL

    - by sea_1987
    I am running a query with Active Record in a model of my CodeIgniter application. The query looks like this:

        public function selectAllJobs()
        {
            $this->db->select('*')
                     ->from('job_listing')
                     ->join('job_listing_has_employer_details',
                            'job_listing_has_employer_details.employer_details_id = job_listing.id',
                            'left');
            //->join('employer_details', 'employer_details.users_id = job_listing_has_employer_details.employer_details_id');
            $query = $this->db->get();
            return $query->result_array();
        }

    This returns an array that looks like this:

        [0]=> array(13) {
            ["id"]=> string(1) "1"
            ["job_titles_id"]=> string(1) "1"
            ["location"]=> string(12) "Huddersfield"
            ["location_postcode"]=> string(7) "HD3 4AG"
            ["basic_salary"]=> string(19) "£20,000 - £25,000"
            ["bonus"]=> string(12) "php, html, j"
            ["benefits"]=> string(11) "Compnay Car"
            ["key_skills"]=> string(1) "1"
            ["retrain_position"]=> string(3) "YES"
            ["summary"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry"
            ["description"]=> string(73) "Lorem Ipsum is simply dummy text of the printing and typesetting industry"
            ["job_listing_id"]=> NULL
            ["employer_details_id"]=> NULL
        }

    job_listing_id and employer_details_id come back as NULL. However, if I run the SQL in phpMyAdmin I get a full set of results. The query I am running in phpMyAdmin is:

        SELECT *
        FROM (`job_listing`)
        LEFT JOIN `job_listing_has_employer_details`
        ON `job_listing_has_employer_details`.`employer_details_id`
        LIMIT 0, 30

    Is there a reason why I am getting differing results?
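
    One explanation (hedged, since only the two queries above are shown): the phpMyAdmin query has no real join condition at all; "ON x.employer_details_id" is just a truthiness test on the column, so every nonzero row joins. The Active Record version compares employer_details_id to job_listing.id, which looks like the wrong pairing for a link table. If job_listing_has_employer_details has a job_listing_id column (a guess from its name), the intended join is probably:

        public function selectAllJobs()
        {
            $this->db->select('*')
                     ->from('job_listing')
                     ->join('job_listing_has_employer_details',
                            'job_listing_has_employer_details.job_listing_id = job_listing.id',
                            'left');
            $query = $this->db->get();
            return $query->result_array();
        }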


  • Getting a directory's permissions via FTP

    - by Gotys
    I am trying to get the permissions of a directory via the FTP command "STAT", like this:

        $directory_list = ftp_raw($conn_id, 'STAT '.$path);

    The above command lists the entire directory contents, including files and subdirectories. I then search the returned data array for the directory I need to check, and retrieve something like:

        drwxr-xr-x 3 user group 77824 May 13 10:15 Targetdir

    This lets me parse the drwxr-xr-x string to find out that the chmod of Targetdir is 0755. The problem comes when the containing directory has 5000 files: (a) it takes a very long time, and (b) the ftp_raw function just returns an empty array 1 run in 10. I don't know if it's timing out or what exactly the problem is.

    Is there a better way to find a directory's permissions? Is there a way to limit the number of entries retrieved by the "STAT" command? I really need just the top 5, no need for the other 4995 files. Does anyone know why my command would not run 100% of the time, and why it would break? I cannot even reproduce the error; it happens randomly.
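
    One possible way to sidestep the huge listing (a hedged sketch, not a tested fix): many servers pass ls-style flags through LIST, so ftp_rawlist($conn_id, '-d '.$path) may return the single line for the directory itself rather than its 5000 siblings; flag support varies by server, so this is worth testing first. Converting the permission string to octal can then be done locally:

        <?php
        // "drwxr-xr-x" -> "0755": read the three rwx triplets after the
        // type character. Special bits (s/t) are ignored in this sketch.
        function permsToOctal($perms)
        {
            $map  = array('r' => 4, 'w' => 2, 'x' => 1);
            $mode = 0;
            for ($i = 1; $i <= 9; $i++) {
                $c = $perms[$i];
                if (isset($map[$c])) {
                    $mode += $map[$c] * pow(8, 2 - (int)(($i - 1) / 3));
                }
            }
            return sprintf('0%o', $mode);
        }

        // $lines = ftp_rawlist($conn_id, '-d ' . $path);
        // ...take the permission field from the matching line, then:
        echo permsToOctal('drwxr-xr-x'); // prints 0755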


  • Oracle user-defined aggregate function for a varray of varchar

    - by baju
    I am trying to write an aggregate function for a varray, and I get this error when I try to use it with data from the DB:

        ORA-00600 internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], []
        [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], []

    The code of the function is really simple (in fact it does nothing):

        create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20)

        ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE

        create or replace type Test as object(
            lastVector TEST_VECTOR,
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number,
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number,
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number,
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number
        );

        create or replace type body Test is
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is
            begin
                sctx := Test(TEST_VECTOR());
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is
            begin
                self.lastVector := value;
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is
            begin
                return ODCIConst.Success;
            end;
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is
            begin
                returnValue := self.lastVector;
                return ODCIConst.Success;
            end;
        end;

        create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR
        PARALLEL_ENABLE AGGREGATE USING Test;

    Next I create some test data:

        create table t1_test_table(
            t1_id number not null,
            t1_value TEST_VECTOR not null,
            Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id)
        )

        insert into t1_test_table (t1_id, t1_value) values (1, TEST_VECTOR('x','y','z'))

    Now everything is prepared to perform queries. This works well:

        Select test_fn(TEST_VECTOR('y','x')) from dual

    But this one fails with the ORA-00600 above:

        Select test_fn(t1_value) from t1_test_table where t1_id = 1

    The Oracle DBMS version I use is 11.2.0.3.0. Has anyone tried such a thing? What could be the reason it does not work, and how can I solve it? Thanks in advance for help.


  • Elo rating system: start value

    - by Marco W.
    I've implemented an Elo rating system in a game. There is no limit on the number of players, and players can join constantly, so the number of players will probably rise gradually. How the Elo values are exactly calculated isn't important, because of this fact: if team A beats team B, then A's Elo win equals B's Elo loss.

    Hence I've got a problem concerning the starting values for my rating system:

    Should I use the starting value 0 for every player? The sum of all Elo values would be constant, but since the number of players is increasing, there would be some kind of Elo deflation, wouldn't there?

    Should I use a starting value greater than 0? In this case, the sum of all Elo values would constantly increase, so there could be Elo inflation. The problem: Elo points lose value, but the starting value always stays the same.

    What should I do? Can you help me? Thanks in advance!
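
    For reference, a minimal zero-sum Elo update looks like the sketch below (PHP, to match the rest of this page; K = 32 is an arbitrary choice). One observation that follows from it: as long as updates are strictly zero-sum and every newcomer enters at the starting value, the average rating across all players stays exactly at that starting value no matter how many players join, so the choice between 0 and, say, 1500 shifts all ratings by a constant rather than causing inflation or deflation.

        <?php
        // Returns the new ratings of A and B after one match.
        // $scoreA: 1 if A won, 0 if A lost, 0.5 for a draw.
        function eloUpdate($ratingA, $ratingB, $scoreA, $k = 32)
        {
            $expectedA = 1 / (1 + pow(10, ($ratingB - $ratingA) / 400));
            $delta = $k * ($scoreA - $expectedA);   // A's win == B's loss
            return array($ratingA + $delta, $ratingB - $delta);
        }

        list($a, $b) = eloUpdate(1500, 1500, 1);  // A beats an equal opponent
        echo "$a / $b";                           // 1516 / 1484, sum unchanged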


  • Recommendations for IPC between parent and child processes in .NET?

    - by Jeremy
    My .NET program needs to run an algorithm that makes heavy use of 3rd-party libraries (32-bit), most of which are unmanaged code. I want to drive the CPU as hard as I can, so the code runs several threads in parallel to divide up the work. I find that running all these threads simultaneously causes temporary memory spikes, pushing the process's virtual memory size toward the 2 GB limit. This memory is released back pretty quickly, but occasionally, if enough threads enter the wrong sections of code at once, the process crosses the "red line" and either the unmanaged code or the .NET code encounters an out-of-memory error. I can throttle back the number of threads, but then my CPU usage is not as high as I would like.

    I am thinking of creating worker processes rather than worker threads to help avoid the out-of-memory errors, since doing so would give each thread of execution its own 2 GB of virtual address space (my box has lots of RAM). I am wondering what the best/easiest methods are to communicate the input and output between the processes in .NET? The file system is an obvious choice. I am used to shared memory, named pipes, and such from my UNIX background. Is there a Windows- or .NET-specific mechanism I should use?


  • WCF + Azure = Nightmare!

    - by lsb
    Hi! I've spent the prior week trying to get a secure form of WCF to work on Azure, all to no avail! My use case is pretty simple: I want to call a WCF endpoint in the cloud and pass messages to be queued for a Worker Role. Beyond that, I want to limit access to pre-authorized users, authenticated via username and password.

    I've tried to get this working with Transport, TransportWithMessageCredential and Message security, but nothing seems to work. Indeed, I've worked through every example and snippet that I could find, most recently the "Service using binary HTTP binding with transport security and message credentials and Silverlight client" example on the http://code.msdn.microsoft.com/wcfazure page. I'm pretty sure that I'm being knocked down by small bugs and beta changes, but the end result is that I'm totally stuck.

    This is a critical-path item for me, so any suggestions would be greatly appreciated. A complete working example or a walkthrough would be even better!


  • What is wrong with this Fortran 77 snippet?

    - by notJim
    I've been tasked with maintaining some legacy Fortran code, and I'm having trouble getting it to compile with gfortran. I've written a fair amount of Fortran 95, but this is my first experience with Fortran 77. This snippet of code is the problematic one:

        CHARACTER*22 IFILE, OFILE
        IFILE='TEST.IN'
        OFILE='TEST.OUT'
        OPEN(5,FILE=IFILE,STATUS='NEW')
        OPEN(6,FILE=OFILE,STATUS='NEW')
        common/pabcde/nfghi

    When I compile with gfortran file.FOR, all lines starting with the common statement are errors (e.g. "Error: Unexpected COMMON statement at (1)" for each following line, until it hits the 25-error limit). I compiled with -Wall -pedantic, but fixing the warnings did not fix this problem.

    The crazy thing is that if I comment out all 4 lines starting with IFILE='TEST.IN', the program compiles and works as expected, but I must comment out all of them. Leaving any of them uncommented gives me the same errors starting with the common statement. If I comment out the common statement, I get the same errors, just starting on the following line.

    I am on OS X Leopard (not Snow Leopard) using gfortran. I've used this very system with gfortran extensively to write Fortran 95 programs, so in theory the compiler itself is sane. What the hell is going on with this code?


  • Converting rows to columns in SQL

    - by Ram
    I have a table (actually a view, but I simplified my example to a table) which gives me data like this:

        id  CompanyName  website
        --  -----------  -------------
        1   Google       google.com
        2   Google       google.net
        3   Google       google.org
        4   Google       google.in
        5   Google       google.de
        6   Microsoft    Microsoft.com
        7   Microsoft    live.com
        8   Microsoft    bing.com
        9   Microsoft    hotmail.com

    I am looking to convert it to get a result like this:

        CompanyName  website1       website2    website3    website4     website5   website6
        -----------  -------------  ----------  ----------  -----------  ---------  --------
        Google       google.com     google.net  google.org  google.in    google.de  NULL
        Microsoft    Microsoft.com  live.com    bing.com    hotmail.com  NULL       NULL

    I have looked into PIVOT, but it looks like the pivoted row values cannot be dynamic (i.e. they can only be certain predefined values). Also, if there are more than 6 websites, I want to limit it to the first 6. A dynamic pivot makes sense, but would I have to incorporate it into my view? Is there a simpler solution for this? Here are the SQL scripts:

        CREATE TABLE [dbo].[Company](
            [id] [int] NULL,
            [CompanyName] [varchar](50) NULL,
            [website] [varchar](50) NULL
        ) ON [PRIMARY]
        GO

        insert into company values (1,'Google','google.com')
        insert into company values (2,'Google','google.net')
        insert into company values (3,'Google','google.org')
        insert into company values (4,'Google','google.in')
        insert into company values (5,'Google','google.de')
        insert into company values (6,'Microsoft','Microsoft.com')
        insert into company values (7,'Microsoft','live.com')
        insert into company values (8,'Microsoft','bing.com')
        insert into company values (9,'Microsoft','hotmail.com')
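
    If the fixed cap of 6 columns is acceptable, one alternative to dynamic pivot SQL (sketched in PHP to keep one language across this page's examples; the legacy mssql_* extension is assumed) is to pivot on the client side:

        <?php
        // Group websites by company, keeping only the first 6 per company.
        $result = mssql_query(
            "SELECT CompanyName, website FROM Company ORDER BY CompanyName, id");

        $pivot = array();
        while ($row = mssql_fetch_assoc($result)) {
            $name = $row['CompanyName'];
            if (!isset($pivot[$name])) {
                $pivot[$name] = array();
            }
            if (count($pivot[$name]) < 6) {
                $pivot[$name][] = $row['website'];
            }
        }

        // Pad short rows to exactly 6 columns and print.
        foreach ($pivot as $name => $sites) {
            $sites = array_pad($sites, 6, 'NULL');
            echo $name . "\t" . implode("\t", $sites) . "\n";
        }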


  • PHP/JSON: My field names are being truncated to 30 characters. Can I stop this?

    - by Biff MaGriff
    Hi everyone! OK, so I got this piece of vendor software that they said should be run on an Apache PHP server with a MySQL database. I didn't have either of those, so I put it on an IIS PHP server and converted the code to work on SQL Server (e.g. mysql_select_db became mssql_select_db, among other things).

    I have the following code in a PHP file:

        $query = "SELECT * FROM TableName WHERE KEY_FIELD = '".$keyField."';";
        $result = mssql_query($query);
        $arr = array();
        while ( $obj = mssql_fetch_object($result) ) {
            $arr[] = $obj;
        }
        echo '{"results":'.json_encode($arr).'}';

    And my results look something like this (captured with Fiddler 2):

        {"results":[{"KEY_FIELD":"57", "My30characterlongfieldthatiscu":"GoodValue"}]}

    "My30characterlongfieldthatiscu" should be "My30characterlongfieldthatiscutoff". Kinda weird, no? The vendor claims the app works perfectly on their end. I'm thinking this is some sort of IIS PHP limit. Is there a way around it, or can I expand it? I found this solution, http://www.php.net/manual/en/ref.mssql.php#74834, but I don't understand it. Thanks!
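
    A hedged reading of the linked php.net note: the mssql_* functions sit on top of DB-Library, which historically capped identifier names at 30 characters, so the truncation would be happening in the client layer rather than in IIS or PHP itself. If that is the cause here, aliasing long columns to short names in the query and mapping them back in PHP sidesteps the limit:

        <?php
        // Alias the long column; only the alias crosses DB-Library.
        $query = "SELECT KEY_FIELD,
                         My30characterlongfieldthatiscutoff AS f1
                  FROM TableName
                  WHERE KEY_FIELD = '".$keyField."';";
        $result = mssql_query($query);

        $arr = array();
        while ($row = mssql_fetch_assoc($result)) {
            // Restore the full field name before JSON encoding.
            $row['My30characterlongfieldthatiscutoff'] = $row['f1'];
            unset($row['f1']);
            $arr[] = $row;
        }
        echo '{"results":'.json_encode($arr).'}';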


  • "Too many indexes on table" error when creating relationships in Microsoft Access 2010.

    - by avianattackarmada
    I have tblUsers, which has a primary key of UserID. UserID is used as a foreign key in many tables. Within a table, it may be the foreign key for multiple fields (e.g. ObserverID, RecorderID, CheckerID). I have successfully added relationships (within the MS Access 'Relationships' view), using table aliases for the multiple relationships per table:

        tblUser.UserID   - 1 to many - tblResight.ObserverID
        tblUser_1.UserID - 1 to many - tblResight.CheckerID

    After creating about 25 relationships with referential integrity enforced, when I try to add an additional one, I get the following error:

        "The operation failed. There are too many indexes on table 'tblUsers.'
        Delete some of the indexes on the table and try the operation again."

    I ran the code I found here and it returned that I have 6 indexes on tblUsers. I know there is a limit of 32 indexes per table. Am I using the relationship GUI wrong? Does Access create an index on tblUsers to enforce referential integrity every time I create a relationship (especially indexes that wouldn't turn up when I ran the script)? I'm kind of baffled; any help would be appreciated.


  • Join with Three Tables

    - by John
    Hello,

    In MySQL, I am using the following three tables (their fields are listed after their titles):

        comment:    commentid, loginid, submissionid, comment, datecommented
        login:      loginid, username, password, email, actcode, disabled, activated, created, points
        submission: submissionid, loginid, title, url, displayurl, datesubmitted

    I would like to display "datecommented" and "comment" for a given "username", where "username" equals a variable called $profile. I would also like to make the "comment" a hyperlink to

        http://www...com/.../comments/index.php?submission='.$rowc["title"].'&submissionid='.$rowc["submissionid"].'&url='.$rowc["url"].'&countcomments='.$rowc["countComments"].'&submittor='.$rowc["username"].'&submissiondate='.$rowc["datesubmitted"].'&dispurl='.$rowc["displayurl"].'

    where countComments equals COUNT(c.commentid) and $rowc is part of the query listed below. I tried using the code below to do what I want, but it didn't work. How could I change it to make it do what I want? Thanks in advance, John

        $sqlStrc = "SELECT s.loginid, s.title, s.url, s.displayurl, s.datesubmitted,
                           l.username, l.loginid, s.title, s.submissionid,
                           c.comment, c.datecommented,
                           COUNT(c.commentid) countComments
                    FROM submission s
                    INNER JOIN login l ON s.loginid = l.loginid
                    LEFT OUTER JOIN comment c ON c.loginid = l.loginid
                    WHERE l.username = '$profile'
                    GROUP BY c.loginid
                    ORDER BY s.datecommented DESC
                    LIMIT 10";
        $resultc = mysql_query($sqlStrc);
        $arrc = array();
        echo "<table class=\"samplesrec1c\">";
        while ($rowc = mysql_fetch_array($resultc)) {
            $dtc = new DateTime($rowc["datecommented"], $tzFromc);
            $dtc->setTimezone($tzToc);
            echo '<tr>';
            echo '<td class="sitename3c">'.$dtc->format('F j, Y &\nb\sp &\nb\sp g:i a').'</a></td>';
            echo '<td class="sitename1c"><a href="http://www...com/.../comments/index.php?submission='.$rowc["title"].'&submissionid='.$rowc["submissionid"].'&url='.$rowc["url"].'&countcomments='.$rowc["countComments"].'&submittor='.$rowc["username"].'&submissiondate='.$rowc["datesubmitted"].'&dispurl='.$rowc["displayurl"].'">'.stripslashes($rowc["comment"]).'</a></td>';
            echo '</tr>';
        }
        echo "</table>";
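
    A hedged sketch of just the query: in the original, ORDER BY s.datecommented points at the wrong table (datecommented lives on comment), and grouping by c.loginid collapses all of the user's comments into one row. If the intent is the user's 10 most recent comments, each with a per-submission comment count, something like this is closer:

        $sqlStrc = "SELECT s.title, s.submissionid, s.url, s.displayurl,
                           s.datesubmitted, l.username,
                           c.comment, c.datecommented,
                           (SELECT COUNT(*)
                            FROM comment c2
                            WHERE c2.submissionid = s.submissionid) AS countComments
                    FROM comment c
                    INNER JOIN login l      ON c.loginid = l.loginid
                    INNER JOIN submission s ON c.submissionid = s.submissionid
                    WHERE l.username = '$profile'
                    ORDER BY c.datecommented DESC
                    LIMIT 10";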


  • Why Is Apache Giving 403?

    - by ThinkCL
    I am getting 403 errors from Apache when I send too many (12) synchronous HTTP POSTs via a desktop app I am building in Xcode/Objective-C. The 12 POST requests are just a few KB each and go out instantly, one after the other, and the Apache error log shows:

        client denied by server configuration: /the-path/the-file.php

    This is Apache 2.0 with PHP 5, and I have this same setup working fine on my local machine. The error is coming from a VPS with my host, which runs very fast and smooth and has plenty of resources. To debug, I threw a sleep(1); call (which stalls script execution by 1 second) into the PHP file, and that fixed it.

    This makes me think that I am breaking some limit on requests from a single IP in a certain amount of time. I have googled and combed the PHP ini and Apache configs, but I cannot find what that directive/setting might be. I should mention that, although it varies, the first 4 or 5 POSTs usually work, and then it starts returning the 403 error intermittently after that. It's really acting like it's bogging down. Any ideas?


  • When is Googling it wrong?

    - by Drahcir
    I've been going through Stack Overflow for quite a bit now and noticed certain people (usually experienced programmers) frown upon Googling (researching) certain problems. Since I myself tend to use Google quite a bit to solve certain programming-related issues, I found certain comments rather demoralising. Now some of you may have come here trigger-happy to delete this post, but I need some clarification.

    I usually Google things, usually syntax-related, that I would never have figured out on my own. For example, I once wondered how to access the properties of a class that I didn't have a direct relationship to. After a bit of research I discovered reflection and got what I wanted.

    Another scenario is learning a new language, in my case Silverlight, which differs in certain aspects of .NET compared to, say, ASP.NET. A few weeks ago I had no idea how to load another Silverlight page (UserControl) and had to Google my way to the solution, which I found wasn't as simple as I had imagined.

    A third scenario is one I myself frown upon: just stealing a huge chunk of code to avoid doing the work yourself, for example paging an HTML table using JavaScript, where one just copies and pastes the JavaScript code without so much as trying to understand how it works. I do admit I have done this once or twice for trivial tasks that had very tight time limits and weren't all that important, but most of the time I still have to throw away what I found because it took too much time to adapt it and get what I wanted out of it.

    In the last scenario, I sometimes have a piece of code that I am really unhappy about, as in I find it sloppy or too overcomplicated, and I look on the Internet to see other ways to tackle the same problem, say filtering through a table. With the knowledge I acquire, I learn new coding practices that help me work more efficiently, like "Do not repeat yourself".

    Now, in your opinion, when do you find it wrong to use Google (or any other research tool) to find a solution to your problem?


  • RequestBuilder timeouts and browser connection limits per domain.

    - by WesleyJohnson
    This is specifically about GWT's RequestBuilder, but should apply to general XHR as well. My company is having me build a near-realtime chat application over HTTP. Yes, I do realize there are better ways to do chat applications, but this is what they want. Eventually we want it working on the iPad/iPhone as well, so Flash is out, which rules out WebSockets and Comet as well, I think?

    Anyway, I'm running into issues where I've set GWT's RequestBuilder timeout to 10 seconds and we get very random and sporadic timeouts. We've got error handling and emailing on the server side and never get any errors, which suggests the underlying XHR request that RequestBuilder is built on never gets to the server and times out after 10 seconds. We're using these requests to poll the server for new messages rather often, to send new messages to the server, and also to poll (less frequently) for other parts of the application.

    What I'm afraid of is that we're running into the browser's limit on concurrent connections to the same domain (2 for IE by default?). Now my question is: if I construct a RequestBuilder and call its send() method, and the browser blocks it from sending until one of the 2 connections per domain is free, does the timeout clock start while the request is being blocked, or not until the browser actually releases the underlying XHR?

    I hope that's clear; if not, please let me know and I'll try to explain more.


  • Information about PTE's (Page Table Entries) in Windows

    - by Patrick
    In order to find buffer overflows more easily, I am changing our custom memory allocator so that it allocates a full 4 KB page instead of only the wanted number of bytes. Then I change the page protection and size so that if the caller writes before or after its allocated piece of memory, the application immediately crashes.

    The problem is that although I have enough memory, the application never starts up completely because it runs out of memory. This has two causes:

    Since every allocation needs 4 KB, we probably reach the 2 GB limit very soon. This problem could be solved if I made a 64-bit executable (I didn't try it yet).

    Even when I only need a few hundred megabytes, the allocations fail at a certain moment. This second problem is the biggest one, and I think it's related to the maximum number of PTEs (page table entries, which store information on how virtual memory is mapped to physical memory, and whether pages should be read-only or not) you can have in a process.

    My questions (or a cry for tips): Where can I find information about the maximum number of PTEs in a process? Is this different (higher) for 64-bit systems/applications or not? Can the number of PTEs be configured in the application or in Windows?

    Thanks, Patrick

    PS. A note for those who will try to argue that you shouldn't write your own memory manager: My application is rather specific, so I really want full control over memory management (can't give any more details). Last week we had a memory overwrite which we couldn't find using the standard C++ allocator and the debugging functionality of the C/C++ runtime (it only said "block corrupt" minutes after the actual corruption). We also tried standard Windows utilities (like GFLAGS, ...), but they slowed down the application by a factor of 100 and couldn't find the exact position of the overwrite either. We also tried the "Full Page Heap" functionality of Application Verifier, but then the application doesn't start up either (probably also running out of PTEs).

