Search Results

Search found 6180 results on 248 pages for 'max van der heijden'.

  • Group multiple media queries formed as output of LESS css

    - by Goje87
    I was planning to use LESS css in my project (PHP). I am planning to use its nested @media query feature. I find that it fails to group the multiple media queries in the output css it generates. For example: // LESS .header { @media all and (min-width: 240px) and (max-width: 319px) { font-size: 12px; } @media all and (min-width: 320px) and (max-width: 479px) { font-size: 16px; font-weight: bold; } } .body { @media all and (min-width: 240px) and (max-width: 319px) { font-size: 10px; } @media all and (min-width: 320px) and (max-width: 479px) { font-size: 12px; } } // output CSS @media all and (min-width: 240px) and (max-width: 319px) { .header { font-size: 12px; } } @media all and (min-width: 320px) and (max-width: 479px) { .header { font-size: 16px; font-weight: bold; } } @media all and (min-width: 240px) and (max-width: 319px) { .body { font-size: 10px; } } @media all and (min-width: 320px) and (max-width: 479px) { .body { font-size: 12px; } } My expected output is (@media queries grouped) @media all and (min-width: 240px) and (max-width: 319px) { .header { font-size: 12px; } .body { font-size: 10px; } } @media all and (min-width: 320px) and (max-width: 479px) { .header { font-size: 16px; font-weight: bold; } .body { font-size: 12px; } } I would like to know if it can be done in LESS it self or is there any simple CSS parser I can use to manipulate the output CSS to group the @media queries.
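
    One workaround that stays within LESS itself (shown as a sketch, using the breakpoints from the question) is to invert the nesting: declare each @media block once at the top level and nest the selectors inside it, so the compiler never needs to merge anything. The trade-off is that a selector's rules are no longer kept together in one place, which is what nesting the queries inside .header and .body buys you.

        // LESS: selectors nested inside the media query, instead of the other way around
        @media all and (min-width: 240px) and (max-width: 319px) {
          .header { font-size: 12px; }
          .body   { font-size: 10px; }
        }
        @media all and (min-width: 320px) and (max-width: 479px) {
          .header { font-size: 16px; font-weight: bold; }
          .body   { font-size: 12px; }
        }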

    Read the article

  • What is the most efficient way to get the highest and the lowest value in an array

    - by meo
    Is there something like PHP's max() in JavaScript? Let's say I have an array like this: [2, 3, 23, 2, 2345, 4, 86, 8, 231, 75] and I want to return the highest and the lowest value in this array. What is the fastest way to do that? I have tried: function madmax (arr) { var max = arr[0], min = arr[1] if (min > max) { max = arr[1] min = arr[0] } for (i=1;i<=arr.length;i++){ if( arr[i] > max ) { max = arr[i] }else if( arr[i] < min ) { min = arr[i] } } return max, min } madmax([123, 2, 345, 153, 5, 98, 456, 4323456, 1, 234, 19874, 238, 4]) Is there a simpler way to retrieve the max and min value of an array?
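
    For reference, a minimal sketch of the two usual approaches: Math.max/Math.min applied over the array (the closest JavaScript has to PHP's max()), and a corrected single-pass loop. Note that the loop bound is < arr.length rather than <=, and that return max, min only returns min (the comma operator), so the pair is returned as an object instead.

        var a = [2, 3, 23, 2, 2345, 4, 86, 8, 231, 75];
        var hi = Math.max.apply(null, a);   // or Math.max(...a) in engines with spread support
        var lo = Math.min.apply(null, a);

        function minMax(arr) {
          var min = arr[0], max = arr[0];
          for (var i = 1; i < arr.length; i++) {   // < length, not <=
            if (arr[i] > max) { max = arr[i]; }
            else if (arr[i] < min) { min = arr[i]; }
          }
          return { min: min, max: max };           // one object carries both values
        }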

    Read the article

  • C++ inline functions

    - by user69514
    I'm confused about how to write inline functions in C++. Let's say I have this function: how would it be turned into an inline function? int maximum( int x, int y, int z ) { int max = x; if ( y > max ) max = y; if ( z > max ) max = z; return max; }
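
    For illustration, here is the same function with the inline specifier added. inline is only a request to the compiler, not a guarantee, and the definition would normally live in a header file so every translation unit that calls it can see the body.

        // Same function, marked inline; typically placed in a header.
        inline int maximum( int x, int y, int z )
        {
            int max = x;
            if ( y > max )
                max = y;
            if ( z > max )
                max = z;
            return max;
        }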

    Read the article

  • How to cast XML values for aggregate functions

    - by renegm
    In SQL Server 2008 I need to execute a query like this: DECLARE @x AS xml SET @x=N'<r><c>First Text</c></r><r><c>Other Text</c></r>' SELECT @x.query('fn:max(r/c)') But it returns nothing (apparently because it tries to convert the xdt:untypedAtomic values to numeric). How do I "cast" r/c to varchar? Something like SELECT @x.query('fn:max(«CAST(r/c «AS varchar(20))»)') Edit: when using nodes() the MAX function is the T-SQL one, not fn:max. In this code: DECLARE @x xml; SET @x = ''; SELECT @x.query('fn:max((1, 2))'); SELECT @x.query('fn:max(("First Text", "Other Text"))'); both queries return the expected results, 2 and "Other Text", so fn:max can evaluate string expressions ad hoc. But the first query doesn't work. How do I force string arguments to fn:max?
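
    One workaround, sketched with the same sample data, is to shred the XML with nodes() and let the T-SQL MAX aggregate do the comparison, which sidesteps the untypedAtomic-to-numeric conversion entirely:

        DECLARE @x AS xml;
        SET @x = N'<r><c>First Text</c></r><r><c>Other Text</c></r>';

        -- Shred each <c> into a row, cast it, and aggregate with the T-SQL MAX
        SELECT MAX(t.c.value('.', 'varchar(20)')) AS max_c
        FROM @x.nodes('/r/c') AS t(c);   -- returns 'Other Text'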

    Read the article

  • Why do arrays in Java choose the biggest? [closed]

    - by Trycon
    I'm new to Java, so I was reading my book with this code: public class mainb1 { public static void main (String[] args) { //datatype name = expression; //food int min, max; int num[] = new int[10]; num[0]=99; num[1]=90; num[2]=-100; num[3]=100; num[4]=23; num[5]=50; num[6]=123; num[7]=3123; num[8]=2; num[9]=923; min=max=num[1]; for(int x=0;x<10;x++) { if(num[x]<min)min=num[x]; if(num[x]>max)max=num[x]; } System.out.println("Min: "+min+" max: "+max); } } It picked out the biggest and the smallest value. I don't get it: if max was 99, then is the last one that is less than min 2? How did this code pick the smallest and the biggest? Can someone explain?
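
    A commented restatement of the same scan (starting both trackers at the first element instead of num[1]) may make the bookkeeping clearer: nothing is "chosen" up front; min and max simply remember the smallest and largest values seen so far while the loop walks the array once.

        int[] num = {99, 90, -100, 100, 23, 50, 123, 3123, 2, 923};
        int min = num[0];                      // best candidates so far
        int max = num[0];
        for (int x = 1; x < num.length; x++) {
            if (num[x] < min) min = num[x];    // smaller value found -> new min
            if (num[x] > max) max = num[x];    // larger value found  -> new max
        }
        System.out.println("Min: " + min + " max: " + max);   // Min: -100 max: 3123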

    Read the article

  • Rails 2 and Nginx: https pages can't load CSS or JS (but will load graphics)

    - by Max Williams
    ADMISSION: i've posted this same question on stackoverflow, before realising it's probabaly better suited to superuser, but it kind of depends on the answer: If it turns out to be a problem in my nginx config, it's definitely superuser. If it turns out to be a problem in my Rails config (or code) then it's arguably stackoverflow. I'm adding some https pages to my rails site. In order to test it locally, i'm running my site under one mongrel_rails instance (on 3000) and nginx. I've managed to get my nginx config to the point where i can actually go to the https pages, and they load. Except, the javascript and css files all fail to load: looking in the Network tab in chrome web tools, i can see that it is trying to load them via an https url. Eg, one of the non-working file urls is https://cmw-local.co.uk/stylesheets/cmw-logged-out.css?1383759216 I have these set up (or at least think i do) in my nginx config to redirect to the http versions of the static files. This seems to be working for graphics, but not for css and js files. If i click on this in the Network tab, it takes me to the above url, which redirects to the http version. So, the redirect seems to be working in some sense, but not when they're loaded by an https page. Like i say, i thought i had this covered in the second try_files directive in my config below, but maybe not. Can anyone see what i'm doing wrong? thanks, Max Here's my nginx config - sorry it's a bit lengthy! I think the error is likely to be in the first (ssl) server block: server { listen 443 ssl; keepalive_timeout 70; ssl_certificate /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.crt; ssl_certificate_key /home/max/work/charanga/elearn_container/elearn/config/nginx/certs/max-local-server.key; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; # ensure that we serve css, js, other statics when requested # as SSL, but if the files don't exist (i.e. 
any non /basket controller) # then redirect to the non-https version location / { try_files $uri @non-ssl-redirect; } # securely serve everything under /basket (/basket/checkout etc) # we need general too, because of the email/username checking location ~ ^/(basket|general|cmw/account/check_username_availability) { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running # other rewrite tests try_files $uri @rails-ssl; expires 1h; } location @non-ssl-redirect { return 301 http://$host$request_uri; } location @rails-ssl { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } } #upstream elrs { # server 127.0.0.1:3000; #} server { listen 80; server_name elearning.dev cmw-dev.co.uk cmw-dev.com cmw-nginx.co.uk cmw-local.co.uk; root /home/max/work/charanga/elearn_container/elearn; access_log /home/max/work/charanga/elearn_container/elearn/log/access.log; error_log /home/max/work/charanga/elearn_container/elearn/log/error.log debug; client_max_body_size 50M; index index.html index.htm; # gzip html, css & javascript, but don't gzip javascript for pre-SP2 MSIE6 (i.e. those *without* SV1 in their user-agent string) gzip on; gzip_http_version 1.1; gzip_vary on; gzip_comp_level 6; gzip_proxied any; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; #text/html # make sure gzip does not lose large gzipped js or css files # see http://blog.leetsoft.com/2007/7/25/nginx-gzip-ssl gzip_buffers 16 8k; # Disable gzip for certain browsers. #gzip_disable "MSIE [1-6].(?!.*SV1)"; gzip_disable "MSIE [1-6]"; # blank gif like it's 1995 location = /images/blank.gif { empty_gif; } # don't serve files beginning with dots location ~ /\. 
{ access_log off; log_not_found off; deny all; } # we don't care if these are missing location = /robots.txt { log_not_found off; } location = /favicon.ico { log_not_found off; } location ~ affiliate.xml { log_not_found off; } location ~ copyright.xml { log_not_found off; } # convert urls with multiple slashes to a single / if ($request ~ /+ ) { rewrite ^(/)+(.*) /$2 break; } # X-Accel-Redirect # Don't tie up mongrels with serving the lesson zips or exes, let Nginx do it instead location /zips { internal; root /var/www/apps/e_learning_resource/shared/assets; } location /tmp { internal; root /; } location /mnt{ root /; } # resource library thumbnails should be served as usual location ~ ^/resource_library/.*/*thumbnail.jpg$ { if (!-f $request_filename) { rewrite ^(.*)$ /images/no-thumb.png break; } expires 1m; } # don't make Rails generate the dynamic routes to the dcr and swf, we'll do it here location ~ "lesson viewer.dcr" { rewrite ^(.*)$ "/assets/players/lesson viewer.dcr" break; } # we need this rule so we don't serve the older lessonviewer when the rule below is matched location = /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf { rewrite ^(.*)$ /assets/players/virgin_lesson_viewer/_cha5513/lessonViewer.swf break; } location ~ v6lessonViewer.swf { rewrite ^(.*)$ /assets/players/v6lessonViewer.swf break; } location ~ lessonViewer.swf { rewrite ^(.*)$ /assets/players/lessonViewer.swf break; } location ~ lgn111.dat { empty_gif; } # try to get autocomplete school names from memcache first, then # fallback to rails when we can't location /schools/autocomplete { set $memcached_key $uri?q=$arg_q; memcached_pass 127.0.0.1:11211; default_type text/html; error_page 404 =200 @rails; # 404 not really! Hand off to rails } location / { # make sure cached copies are revalidated once they're stale add_header Cache-Control "public, must-revalidate, proxy-revalidate"; # this serves Rails static files that exist without running other rewrite tests try_files $uri @rails; expires 1h; } location @rails { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_redirect off; proxy_read_timeout 180; proxy_next_upstream off; proxy_pass http://127.0.0.1:3000; expires 0d; } }

    Read the article

  • What's the most compact way to store a password-protected RSA key?

    - by Tim
    I've tried converting a PEM-encoded key to DER format, and it appears the password is stripped regardless of the -passout argument. Example: openssl rsa -in tmp.pem -outform DER -out tmp.der -passin pass:foo -passout pass:bar -des3 The resulting key appears no longer password-protected, so I am assuming that DER format does not support a password - is that correct? What alternative way is there to store this in a compact, binary form, and keep the password-protection?
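
    That is correct for the traditional format: its encryption lives in the PEM headers, so converting to DER drops it. An encrypted PKCS#8 key, however, can carry the password protection inside the DER structure itself. A hedged sketch (flags worth double-checking against your OpenSSL version):

        # Convert to a password-protected PKCS#8 key in DER form
        openssl pkcs8 -topk8 -inform PEM -in tmp.pem \
            -outform DER -out tmp.der \
            -v2 des3 -passin pass:foo -passout pass:bar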

    Read the article

  • MSSQL: How to copy a file (pdf, doc, txt...) stored in a varbinary(max) field to a file in a CLR stored procedure

    - by user193655
    I ask this question as a followup of this question. A solution that uses bcp and xp_cmdshell, that is not my desired solution, has been posted here: stackoverflow.com/questions/828749/ms-sql-server-2005-write-varbinary-to-file-system (sorry i cannot post a second hyperlink since my reputation is les than 10). I am new to c# (since I am a Delphi developer) anyway I was able to create a simple CLR stored procedures by following a tutorial. My task is to move a file from the client file system to the server file system (the server can be accessed using remote IP, so I cannot use a shared folder as destination, this is why I need a CLR stored procedure). So I plan to: 1) store from Delphi the file in a varbinary(max) column of a temporary table 2) call the CLR stored procedure to create a file at the desired path using the data contained in the varbinary(max) field Imagine I need to move C:\MyFile.pdf to Z:\MyFile.pdf, where C: is a harddrive on local system and Z: is an harddrive on the server. I provide the code below (not working) that someone can modify to make it work? Here I suppose to have a table called MyTable with two fields: ID (int) and DATA (varbinary(max)). Please note it doesn't make a difference if the table is a real temporary table or just a table where I temporarly store the data. I would appreciate if some exception handling code is there (so that I can manage an "impossible to save file" exception). I would like to be able to write a new file or overwrite the file if already existing. [Microsoft.SqlServer.Server.SqlProcedure] public static void VarbinaryToFile(int TableId) { using (SqlConnection connection = new SqlConnection("context connection=true")) { connection.Open(); SqlCommand command = new SqlCommand("select data from mytable where ID = @TableId", connection); command.Parameters.AddWithValue("@TableId", TableId); // This was the sample code I found to run a query //SqlContext.Pipe.ExecuteAndSend(command); // instead I need something like this (THIS IS META_SYNTAX!!!): SqlContext.Pipe.ResultAsStream.SaveToFile('z:\MyFile.pdf'); } } (one subquestion is: is this approach coorect or there is a way to directly pass the data to the CLR stored procedure so I don't need to use a temp table?) If the subquestion's answer is No, could you describe the approach of avoiding a temp table? So is there a better way then the one I describe above (=temp table + Stored procedure)? A way to directly pass the dataastream from the client application to the CLR stored procedure? (my files can be any size but also very big)
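
    As a sketch of just the missing piece (untested; the path z:\MyFile.pdf and the single data column come from the question): instead of the SqlContext.Pipe meta-syntax, open a SqlDataReader in SequentialAccess mode and stream the bytes into a FileStream. A try/catch around this block is the natural place to surface an "impossible to save file" error; the usings needed are System.Data, System.Data.SqlClient and System.IO.

        using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.SequentialAccess))
        {
            if (reader.Read())
            {
                using (FileStream fs = new FileStream(@"z:\MyFile.pdf", FileMode.Create, FileAccess.Write))
                {
                    byte[] buffer = new byte[8040];   // stream the blob in chunks
                    long offset = 0;
                    long bytesRead;
                    while ((bytesRead = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
                    {
                        fs.Write(buffer, 0, (int)bytesRead);
                        offset += bytesRead;
                    }
                }
            }
        }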

    Read the article

  • Are there disadvantages to using VARCHAR(MAX) in a table?

    - by Meiscooldude
    Here is my predicament. Basically, I need a column in a table to hold an unknown number of characters. I was curious whether performance problems could arise in SQL Server from using a VARCHAR(MAX) or NVARCHAR(MAX) column. For example: this time I only need to store 3 characters, and most of the time I only need to store 10 characters, but there is a small chance that it could be up to a couple of thousand characters in that column, or even possibly a million; it is unpredictable. I can guarantee, though, that it will not go over the 2 GB limit. I was just curious whether there are any performance issues, or possibly better ways of solving this problem.

    Read the article

  • How to select a MAX value from a column with the Query Builder in the Kohana framework?

    - by Victor Czechov
    I need to INSERT data into a table, but before that query I must know the MAX value of the column position, so that I can INSERT the data at my previously SELECTed position+1. Is this possible with the query builder? Following my first comment I tried this query: $p = DB::select(array(DB::expr('MAX(`position`)', 'p')))->from('supercategories')->execute(); echo $p; and got this error: ErrorException [ Notice ]: Undefined offset: 1 MODPATH\database\classes\kohana\database.php [ 505 ] 500 */ 501 public function quote_column($column) 502 { 503 if (is_array($column)) 504 { 505 list($column, $alias) = $column; 506 } 507 508 if ($column instanceof Database_Query) 509 { 510 // Create a sub-query

    Read the article

  • Getting Out of Memory: Java heap space, but while viewing the heap it uses at most 50 MB

    - by ikky
    Hi! I'm using ASANT to run an XML build file which points to a NARS.jar file (I do not have the project file for NARS.jar). I'm getting "java.lang.OutOfMemoryError: Java heap space". I used VisualVM to look at the heap while running NARS.jar, and it says it uses at most 50 MB of heap space. I've set the initial and max heap size to 512 MB. Does anyone have an idea of what could be wrong? I have 1 GB of physical memory and created a 5 GB pagefile (for test purposes). Thanks in advance.
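
    One thing worth checking (a sketch; it assumes asant passes the standard ANT_OPTS variable to its JVM the way stock Ant does, and the build-file name is made up): the -Xmx setting has to reach the JVM that actually runs NARS.jar, which is not necessarily the process VisualVM was watching.

        # give the Ant/asant JVM itself a larger heap
        export ANT_OPTS="-Xms256m -Xmx512m"
        asant -buildfile build.xml        # hypothetical build-file name

        # or, if NARS.jar is run through a forking <java> task,
        # the flag has to go to that child JVM instead:
        # <java jar="NARS.jar" fork="true">
        #     <jvmarg value="-Xmx512m"/>
        # </java>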

    Read the article

  • Tomcat 5.5, Is there a max upload speed per request?

    - by maclema
    I am having an issue when uploading files to tomcat. It seems that tomcat (or something else?) will not handle the upload as fast as I can send it. When uploading multiple files concurrently I can max out my local connection upload speed (2.1MB/s). However, when uploading only one file at a time, no matter how small or large the file, the upload will max out around 400KB/s. I have tried setting the appReadBufSize higher but it makes no difference. Is there something else that would be limiting the upload speed per request? Proxy Server: CentOS 4 Apache 2 SSL Tomcat Server: CentOS 4 Tomcat 5.5.25 (Tomcat Native Library Is Installed) Java 6 Thanks! Matt

    Read the article

  • Is there a function similar to Math.Max for Entity Framework?

    - by Ryan ONeill
    I have an entity framework query as follows; From T In Db.MyTable Where (T.Col1 - T.Col2) + T.Col3 - T.Col4 > 0 _ Select T I now need to make sure that the bracketed part '(T.Col1 - T.Col2)' does not go below zero. In .Net, I'd code it as follows (but obviously EF does not like Math.Max). From T In Db.MyTable Where Math.Max(T.Col1 - T.Col2,0) + T.Col3 - T.Col4 > 0 _ Select T Is there an easy way to do this? I am using EF 2.0 (not the latest, just released version). Thanks in advance

    Read the article

  • PowerShell: Filter selected rows with only max values as output?

    - by David.Chu.ca
    I have a variable ($result) holding several rows of data or objects like this: PS> $result | ft -auto; name value ---- ----- a 1 a 2 b 30 b 20 .... What I need is to get, for each name, the row with max(value), like this filtered output: PS> $result | ? | ft -auto name value ---- ----- a 2 b 30 .... I'm not sure what command or filter is available (the ? above) so that I can get each name and only the max value for that name.
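
    One way to sketch the filter (assuming $result really holds objects with name and value properties): group on name, then keep the row with the largest value from each group.

        $result |
            Group-Object -Property name |
            ForEach-Object { $_.Group | Sort-Object -Property value -Descending | Select-Object -First 1 } |
            Format-Table -AutoSize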

    Read the article

  • Why do I have to set the max length of every damn text column in the database?

    - by John Leidegren
    Why is it that every RDBMS insists that you tell it what the max length of a text field is going to be... why can't it just infer this information from the data that's put into the database? I've mostly worked with MS SQL Server, but every other database I know also demands that you set these arbitrary limits on your data schema. The reality is that this is not particularly helpful or friendly to work with because the business requirements change all the time and almost every day some end-user is trying to put a lot of text into that column. Does anyone with some inner working knowledge of an RDBMS know why we just don't infer the limits from the data that's put into the storage? I'm not talking about guessing the type information, but guessing the limits of a particular text column. I mean, there's a reason why I don't use nvarchar(max) on every text column in the database.

    Read the article

  • Windows 7 cannot burn a DVD-R at 4x or 8x? (it seems to have to use the MAX speed)

    - by Jian Lin
    It's great that Windows 7 can burn an .iso onto a DVD-R, but it seems there is no way to change the speed to 8x or 4x (so it may go at "max", which is 16x). Some of my DVD drives cannot reliably read data if the disc was burned at 16x, which is why I usually limit it to 8x. Also, some articles say that a disc burned at 4x can last longer, so I am tempted to burn things at 4x sometimes. Is there any method or hack that can make this happen?

    Read the article

  • Why such a dramatic difference in wireless router max. simultaneous connections?

    - by Jez
    Recently, I've needed to look into buying a wireless router for a mission-critical system at work that will need to support quite a few simultaneous connections (potentially a few hundred laptops). One thing I've noticed is that there seems to be a dramatic difference between the max. simultaneous connections different routers can support; see this page for example - anything from 32 to 35,000! Why is there this degree of difference? You'd have thought that if we know how to make routers that can handle thousands of connections, we wouldn't be making stuff that's limited to a pathetic 32 anymore. Is it a firmware thing? A hardware thing? Are low-end manufacturers purposely putting low arbitrary connection limits in so people can be "encouraged" to pay more for high-end routers?

    Read the article

  • SQL SERVER – Online Index Rebuilding Index Improvement in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where you see something working and you feel it should not be working? Well, I had a similar moment a few days ago. I know that SQL Server 2008 supports online indexing. However, I also know that I cannot rebuild an index ONLINE if I have used VARCHAR(MAX), NVARCHAR(MAX) or a few other data types. While I held this belief very strongly, I came across a situation where I had to go online and do a little bit of reading in Books Online. Here is a similar example. First of all, run the following code in SQL Server 2008 or SQL Server 2008 R2. USE TempDB GO CREATE TABLE TestTable (ID INT, FirstCol NVARCHAR(10), SecondCol NVARCHAR(MAX)) GO CREATE CLUSTERED INDEX [IX_TestTable] ON TestTable (ID) GO CREATE NONCLUSTERED INDEX [IX_TestTable_Cols] ON TestTable (FirstCol) INCLUDE (SecondCol) GO USE [tempdb] GO ALTER INDEX [IX_TestTable_Cols] ON [dbo].[TestTable] REBUILD WITH (ONLINE = ON) GO DROP TABLE TestTable GO Now run the same code in SQL Server 2012. Observe the difference between the two executions; you will get the following results. In SQL Server 2008/R2 it will throw the following error: Msg 2725, Level 16, State 2, Line 1 An online operation cannot be performed for index ‘IX_TestTable_Cols’ because the index contains column ‘SecondCol’ of data type text, ntext, image, varchar(max), nvarchar(max), varbinary(max), xml, or large CLR type. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline. In SQL Server 2012 it will run successfully and will not throw any error: Command(s) completed successfully. I always thought it would throw an error if VARCHAR(MAX) or NVARCHAR(MAX) is used in the table schema definition. When I saw this result it was clear to me that this is for sure not a bug but an enhancement in SQL Server 2012. As a matter of fact, I always wanted this feature to be added to the SQL Server engine, as it enables ONLINE index rebuilding for mission-critical tables which need to be always online. I quickly searched online and landed on Jacob Sebastian’s blog, where he has blogged about it as well. Well, is there any other new feature in SQL Server 2012 that gave you a good surprise? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Dutch for once: looking for a new challenge!

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2013/10/11/dutch-for-once-op-zoek-naar-een-nieuwe-uitdaging.aspx I apologize to my non-Dutch speaking readers: this post is about me looking for a new job, and since I am based in the Netherlands I will do this in Dutch… Next time I will be technical (and thus in English) again! The nice thing about being an interim is that an engagement eventually ends. I very deliberately chose life as a freelancer: I want to get to know a great many different people and organisations, and this work is ideally suited to that! After all, with every engagement I not only bring new ideas and knowledge, I also learn an enormous amount myself every time. I can then use that knowledge on a follow-up engagement, and in that way I spread it among companies in the Netherlands. And there is nothing better than seeing that what I bring takes an organisation to another level! Looking for a different company each time means that each time I have to leave an organisation. The tricky part is finding the right moment. Seen from the outside that is hard to judge: when can I no longer contribute anything new, and when is it time to move on? When is it time to say that the organisation knows everything I can teach it? In my current engagement that moment has now arrived. Over the past eleven months I have watched this company change from a small but enthusiastic group of developers into a professional organisation with more than twice as many developers. That change process has been very instructive, and I am very glad that I was able, and allowed, to guide it. Growing from three teams of five or six developers each to six teams of seven or eight developers per team means you have to approach your development process quite differently. It also means you have to organise your teams differently, that the organisation itself has to be modelled differently, and that people have to work with each other differently. To get that done everyone worked very hard, a number of mistakes were made, a great deal was learned from those mistakes, and in the end an almost entirely new company emerged. It is time to leave this company. I am curious where I will end up next: I am looking around at the possibilities. What I do know is this: the company I am looking for is a company that is open to change. Change, but with an eye for the individual; after all, people are at the heart of software development! In any case, I am looking forward to it!

    Read the article

  • How do you set max execution time of PHP's CLI component?

    - by cwd
    How do you set the max execution time of PHP's CLI component? I have a CLI script that has gone into an infinite loop and I'm not sure how to kill it without restarting. I used Quicksilver to launch it, so I can't press Control+C at the command line. I tried running ps -A (show all processes) but php is not showing up in that list, so perhaps it has timed out on its own, but how do you manually set the time limit? I tried to find information about where I should set the max_execution_time setting; I'm used to setting this for the version of PHP that runs with Apache, but I have no idea where to set it for the version of PHP that lives in /usr/bin. I did see the following quote, which does seem to be accurate (see screenshot below), but having an unlimited execution time doesn't seem like a good idea. Keep in mind that for CLI SAPI max_execution_time is hardcoded to 0. So it seems to be changed by ini_set or set_time_limit but it isn't, actually. The only references I've found to this strange decision are deep in bugtracker (http://bugs.php.net/37306) and in php.ini (comments for 'max_execution_time' directive). (via http://php.net/manual/en/function.set-time-limit.php) ini_set('max_execution_time') has no effect. I also tried the same thing and got the same result with set_time_limit(7).

    Read the article

  • A tale from a Stalker

    - by Peter Larsson
    Today I thought I should write something about a stalker I've got. Don't get me wrong, I have way more fans than stalkers, but this stalker is particular persistent towards me. It all started when I wrote about Relational Division with Sets late last year(http://weblogs.sqlteam.com/peterl/archive/2010/07/02/Proper-Relational-Division-With-Sets.aspx) and no matter what he tried, he didn't get a better performing query than me. But this I didn't click until later into this conversation. He must have saved himself for 9 months before posting to me again. Well... Some days ago I get an email from someone I thought i didn't know. Here is his first email Hi, I want a proper solution for achievement the result. The solution must be standard query, means no using as any native code like TOP clause, also the query should run in SQL Server 2000 (no CTE use). We have a table with consecutive keys (nbr) that is not exact sequence. We need bringing all values related with nearest key in the current key row. See the DDL: CREATE TABLE Nums(nbr INTEGER NOT NULL PRIMARY KEY, val INTEGER NOT NULL); INSERT INTO Nums(nbr, val) VALUES (1, 0),(5, 7),(9, 4); See the Result: pre_nbr     pre_val     nbr         val         nxt_nbr     nxt_val ----------- ----------- ----------- ----------- ----------- ----------- NULL        NULL        1           0           5           7 1           0           5           7           9           4 5           7           9           4           NULL        NULL The goal is suggesting most elegant solution. I would like see your best solution first, after that I will send my best (if not same with yours)   Notice there is no name, no please or nothing polite asking for my help. So, on the top of my head I sent him two solutions, following the rule "Work on SQL Server 2000 and only standard non-native code".     
-- Peso 1 SELECT               pre_nbr,                              (                                                           SELECT               x.val                                                           FROM                dbo.Nums AS x                                                           WHERE              x.nbr = d.pre_nbr                              ) AS pre_val,                              d.nbr,                              d.val,                              d.nxt_nbr,                              (                                                           SELECT               x.val                                                           FROM                dbo.Nums AS x                                                           WHERE              x.nbr = d.nxt_nbr                              ) AS nxt_val FROM                (                                                           SELECT               (                                                                                                                     SELECT               MAX(x.nbr) AS nbr                                                                                                                     FROM                dbo.Nums AS x                                                                                                                     WHERE              x.nbr < n.nbr                                                                                        ) AS pre_nbr,                                                                                        n.nbr,                                                                                        n.val,                                                                                        (                                                                                                                     SELECT               MIN(x.nbr) AS nbr                                                                                                                     FROM                dbo.Nums AS x                                                                                                                     WHERE              x.nbr > n.nbr                                                                                        ) AS nxt_nbr                                                           FROM                dbo.Nums AS n                              ) AS d -- Peso 2 CREATE TABLE #Temp                                                         (                                                                                        ID INT IDENTITY(1, 1) PRIMARY KEY,                                                                                        nbr INT,                                                                                        val INT                                                           )   INSERT                                            #Temp                                                           (                                                                                        nbr,                                                                                        val                                                           ) SELECT                                            nbr,                                                           val FROM                                             dbo.Nums ORDER BY         nbr   SELECT                                            pre.nbr AS pre_nbr,       
                                                    pre.val AS pre_val,                                                           t.nbr,                                                           t.val,                                                           nxt.nbr AS nxt_nbr,                                                           nxt.val AS nxt_val FROM                                             #Temp AS pre RIGHT JOIN      #Temp AS t ON t.ID = pre.ID + 1 LEFT JOIN         #Temp AS nxt ON nxt.ID = t.ID + 1   DROP TABLE    #Temp Notice there are no indexes on #Temp table yet. And here is where the conversation derailed. First I got this response back Now my solutions: --My 1st Slt SELECT T2.*, T1.*, T3.*   FROM Nums AS T1        LEFT JOIN Nums AS T2          ON T2.nbr = (SELECT MAX(nbr)                         FROM Nums                        WHERE nbr < T1.nbr)        LEFT JOIN Nums AS T3          ON T3.nbr = (SELECT MIN(nbr)                         FROM Nums                        WHERE nbr > T1.nbr); --My 2nd Slt SELECT MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END) AS pre_nbr,        (SELECT val FROM Nums WHERE nbr = MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END)) AS pre_val,        N1.nbr AS cur_nbr, N1.val AS cur_val,        MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END) AS nxt_nbr,        (SELECT val FROM Nums WHERE nbr = MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END)) AS nxt_val   FROM Nums AS N1,        Nums AS N2  GROUP BY N1.nbr, N1.val;   /* My 1st Slt Table 'Nums'. Scan count 7, logical reads 14 My 2nd Slt Table 'Nums'. Scan count 4, logical reads 23 Peso 1 Table 'Nums'. Scan count 9, logical reads 28 Peso 2 Table '#Temp'. Scan count 0, logical reads 7 Table 'Nums'. Scan count 1, logical reads 2 Table '#Temp'. Scan count 3, logical reads 16 */  To this, I emailed him back asking for a scalability test What if you try with a Nums table with 100,000 rows? His response to that started to get nasty.  I have to say Peso 2 is not acceptable. As I said before the solution must be standard, ORDER BY is not part of standard SELECT. Try this without ORDER BY:  Truncate Table Nums INSERT INTO Nums (nbr, val) VALUES (1, 0),(9,4), (5, 7)  So now we have new rules. No ORDER BY because it's not standard SQL! Of course I asked him  Why do you have that idea? ORDER BY is not standard? To this, his replies went stranger and stranger Standard Select = Set-based (no any cursor) It’s free to know, just refer to Advanced SQL Programming by Celko or mail to him if you accept comments from him. What the stalker probably doesn't know, is that I and Mr Celko occasionally are involved in some conversation and thus we exchange emails. I don't know if this reference to Mr Celko was made to intimidate me either. So I answered him, still polite, this What do you mean? The SELECT itself has a ”cursor under the hood”. Now the stalker gets rude  But however I mean the solution must no containing any order by, top... No problem, I do not like Peso 2, it’s very non-intelligent and elementary. Yes, Peso 2 is elementary but most performing queries are... And now is the time where I started to feel the stalker really wanted to achieve something else, so I wrote to him So what is your goal? Have a query that performs well, or a query that is super-portable? My Peso 2 outperforms any of your code with a factor of 100 when using more than 100,000 rows. 
While I awaited his answer, I posted him this query Ok, here is another one -- Peso 3 SELECT             MAX(CASE WHEN d = 1 THEN nbr ELSE NULL END) AS pre_nbr,                    MAX(CASE WHEN d = 1 THEN val ELSE NULL END) AS pre_val,                    MAX(CASE WHEN d = 0 THEN nbr ELSE NULL END) AS nbr,                    MAX(CASE WHEN d = 0 THEN val ELSE NULL END) AS val,                    MAX(CASE WHEN d = -1 THEN nbr ELSE NULL END) AS nxt_nbr,                    MAX(CASE WHEN d = -1 THEN val ELSE NULL END) AS nxt_val FROM               (                              SELECT    nbr,                                        val,                                        ROW_NUMBER() OVER (ORDER BY nbr) AS SeqID                              FROM      dbo.Nums                    ) AS s CROSS JOIN         (                              VALUES    (-1),                                        (0),                                        (1)                    ) AS x(d) GROUP BY           SeqID + x.d HAVING             COUNT(*) > 1 And here is the stats Table 'Nums'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0. It beats the hell out of your queries…. Now I finally got a response from my stalker and now I also clicked who he was. This is his reponse Why you post my original method with a bit change under you name? I do not like it. See: http://www.sqlservercentral.com/Forums/Topic468501-362-14.aspx ;WITH C AS ( SELECT seq_nbr, k,        DENSE_RANK() OVER(ORDER BY seq_nbr ASC) + k AS grp_fct   FROM [Sample]         CROSS JOIN         (VALUES (-1), (0), (1)         ) AS D(k) ) SELECT MIN(seq_nbr) AS pre_value,        MAX(CASE WHEN k = 0 THEN seq_nbr END) AS current_value,        MAX(seq_nbr) AS next_value   FROM C GROUP BY grp_fct HAVING min(seq_nbr) < max(seq_nbr); These posts: Posted Tuesday, April 12, 2011 10:04 AM Posted Tuesday, April 12, 2011 1:22 PM Why post a solution where will not work in SQL Server 2000? Wait a minute! His own solution is using both a CTE and a ranking function so his query will not work on SQL Server 2000! Bummer... The reference to "Me not like" are my exact words in a previous topic on SQLTeam.com and when I remembered the phrasing, I also knew who he was. See this topic http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 where he writes a query and posts it under my name, as if I wrote it. So I answered him this (less polite). Like I keep track of all topics in the whole world… J So you think you are the only one coming up with this idea? Besides, “M S solution” doesn’t work.   This is the result I get pre_value        current_value                             next_value 1                           1                           5 1                           5                           9 5                           9                           9   And I did nothing like you did here, where you posted a solution which you “thought” I should write http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000. All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me? After a few hours I get this email back. I don't fully understand it, but it's probably a language barrier. 
>>Like I keep track of all topics in the whole world… J So you think you are the only one coming up with this idea?<< You right, but do not think you are the first creator of this.   >>Besides, “M S Solution” doesn’t work. This is the result I get <<   Why you get so unimportant mistake? See this post to correct it: Posted 4/12/2011 8:22:23 PM >> So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000. <<  Again, why you get some unimportant incompatibility? You offer that solution for current goals not me  >> All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me? <<  No, I only wanted to know who you will solve it. Now I know you do not have a special solution. No problem. No problem for me either. So I just answered him I am not the first, and you are not the first to come up with this idea. So what is your problem? I am pretty sure other people have come up with the same idea before us. I used this technique all the way back to 2007, see http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911 Let's see if he returns...  He did! >> So what is your problem? << Nothing Thanks for all replies; maybe we have some competitions in future, maybe. Also I like you but you do not attend it. Your behavior with me is not friendly. Not any meeting… Regards //Peso

    Read the article

  • AIA - Server Topologies and Deployment

    - by Hans Viehmann
    There is a new white paper that describes the different variants of AIA topologies in detail. The different options arise above all from running several AIA instances in parallel (e.g. splitting into Dev/Test/Prod, splitting for load distribution, assignment to different geographies, ...). Further variation comes from how the different parts of the DB schema are distributed (AIA metadata, cross-referencing data, AIA Lifecycle Repository, JMS schema, etc.). And finally, different topologies of the SOA runtime environment, such as SOA clusters, can also be supported. The document describes these variants and their typical fields of use, as well as how to implement them with the help of the installer. A second part of the white paper deals with deploying the AIA artifacts into these topologies, in particular with the tool support provided by the "Project Lifecycle Workbench" and the "AIA Installation Driver". The former is a new component in AIA 11gR1 that can generate deployment plans at the end of the development cycle, while the likewise new "AIA Installation Driver" then transfers the artifacts into the target environment. The white paper is not yet available on oracle.com and is unfortunately so large that it cannot be uploaded to blogs.oracle.com. If you are interested, however, I will be happy to send it by mail.

    Read the article
