Search Results

Search found 11262 results on 451 pages for 'important directories'.

  • Action methods vs. public methods in PHP frameworks

    - by Tower
    There are plenty of PHP frameworks out there, as many of you know, and I am interested in your thoughts on this. Zend Framework has so-called action controllers, which must contain at least one action method: a method whose name ends in "Action". For example:

        public function indexAction() {}

    The "Action" suffix is important; without it you can't access the method directly via the URI. However, in some other frameworks, like Kohana, you have public and private methods, where public methods are accessible and private ones are not. So my question is: which do you think is the better approach? From a security point of view I would vote for Zend's approach, but I am interested in knowing what others think.
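
    To make the contrast concrete, here is a minimal sketch of the two dispatch styles; the controller and dispatcher are hypothetical illustrations, not actual Zend or Kohana code:

        <?php
        class ArticleController
        {
            // Zend-style: only methods whose names end in "Action" are routable.
            public function indexAction() { echo 'routable'; }

            // Kohana-style relies on visibility instead: public methods are
            // routable, while private/protected helpers like this one are not.
            private function formatDate($ts) { return date('Y-m-d', $ts); }
        }

        // A Zend-style dispatcher refuses anything without the suffix.
        function dispatch($controller, $action)
        {
            $method = $action . 'Action';
            if (!method_exists($controller, $method)) {
                throw new Exception("Not routable: $action");
            }
            $controller->$method();
        }

        dispatch(new ArticleController(), 'index'); // prints "routable"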

  • Most commonly occurring string in a MySQL column

    - by MILESMIBALERR
    I am making a website where users can vote on which category a page belongs to. They can vote that the page is category a, b, c, or d, for example. Please don't ask what I am using this for; it is not important, I just want to know how to do it. I need to find the most commonly occurring category in the MySQL table out of all the votes. Each time a user submits their vote, it stores the "category" that they voted for and the "page_id". I have this so far:

        SELECT page_id, category FROM categories GROUP BY page_id

    You cannot simply use a COUNT(*) WHERE category = 'a' and repeat it for each category, because there are many more categories in the actual project.
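
    A sketch of one common approach, using the table and column names from the question: count the votes per category and take the top row. For a single page (the page id 42 is hypothetical) this might look like:

        SELECT category, COUNT(*) AS votes
        FROM categories
        WHERE page_id = 42
        GROUP BY category
        ORDER BY votes DESC
        LIMIT 1;

    Dropping the WHERE clause and grouping by page_id, category gives the per-category totals for every page at once; the first row per page_id, ordered by votes DESC, is that page's winner.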

  • Comparison of ASP.Net Reporting Solutions

    - by Brian MacKay
    This week my team spent way too much time trying to do simple things in Reporting Services, and I've decided to start evaluating other options. I know there are some good options out there now that aren't too expensive; I've heard that Telerik, ActiveReports, and a few others are widely used. I was hoping to get some first-hand accounts of reporting tools that you've used. Specifically, I definitely want to hear your thoughts about:

    - Pain points and gotchas you ran into.
    - Ease of use for report design. It's a little bizarre to me that Access still seems to hold the throne in this area!
    - What's your favorite tool?
    - Anything I've missed that seems important to you.

    Thanks a lot!

  • How to create Windows 7 installation USB media from Linux (to install Windows 7) - help needed to find the better method

    - by Abel Coto
    I have been reading some web pages and posts, here and in other forums, about how to create Windows 7 installation USB media (to install Windows 7 from a USB stick) from Linux. I asked on TechNet about this, and they gave me the general idea of how to do it. I personally am not very familiar with Linux, but basically all that you need to do, in whatever way you do it, is the following: format a USB flash drive as either FAT32 or NTFS, create a partition that is large enough to host the Windows installation (give or take 3 GB for 64-bit, around 2.5 GB for 32-bit), and mark that partition as active/bootable. Since this can be done with Windows, but just as well with a tool like GParted, you should be able to do the same in Debian. Once you have created that partition, mount the ISO that you downloaded and copy all files, starting from the root, into the root of the USB flash drive. That's all there is to it.

    There is a method that I found in various places that is almost the same as what the TechNet poster said, but it has a step that I don't know is really necessary or not, since dd does not always work. Basically, the extra step is to write a proper boot sector to the USB stick, which can be done from Linux with ms-sys. This works with the Win7 retail version. Here is the complete rundown again:

    1. Install ms-sys.
    2. Check which device your USB media is assigned; here we will assume it is /dev/sdb.
    3. Delete all partitions, create a new one taking up all the space, set its type to NTFS, and mark it bootable:

            # cfdisk /dev/sdb

    4. Create the NTFS filesystem:

            # mkfs.ntfs -f /dev/sdb1

    5. Mount the ISO and the USB media:

            # mount -o loop win7.iso /mnt/iso
            # mount /dev/sdb1 /mnt/usb

    6. Copy over all files:

            # cp -r /mnt/iso/* /mnt/usb/

    7. Write a Windows 7 MBR to the USB stick:

            # ms-sys -7 /dev/sdb

    ...and you're done.

    My questions: Shouldn't the USB stick work without the last step (# ms-sys -7 /dev/sdb)? Or is writing the boot sector a must, rather than only marking the partition as bootable? Would it be better to use rsync instead of cp -r? All these steps should be done as root, I suppose; or, if not, should I chmod to 664 and chown the directories where the USB and the ISO are mounted? I suppose the easiest thing is to copy the data as root, and that this will not affect the data. Has anyone tried this method, or something similar like copying the ISO with dd?

  • Handling two WebExceptions properly

    - by baron
    Hi everyone, I am trying to handle two different WebExceptions properly. They are thrown after calling WebClient.DownloadFile(string address, string fileName). AFAIK, there are two I have to handle so far, both WebExceptions:

    1. The remote name could not be resolved (i.e. no network connectivity to reach the server and download the file).
    2. (404) File not found (i.e. the file doesn't exist on the server).

    There may be more, but this is what I've found most important so far. So how should I handle this properly? They are both WebExceptions, but I want to handle each case above differently. This is what I have so far:

        try
        {
            using (var client = new WebClient())
            {
                client.DownloadFile("...", "...");
            }
        }
        catch (InvalidOperationException ioEx)
        {
            if (ioEx is WebException)
            {
                if (ioEx.Message.Contains("404"))
                {
                    // handle 404: the file doesn't exist
                }
                if (ioEx.Message.Contains("remote name could not"))
                {
                    // handle no connectivity
                }
            }
        }

    As you can see, I am checking the message to see what type of WebException it is. I would assume there is a better or more precise way to do this? Thanks
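
    A sketch of a more precise approach (the URL and file name are hypothetical): catch WebException itself and branch on its Status and, for protocol errors, on the HTTP status code, rather than on message text:

        using System;
        using System.Net;

        class Downloader
        {
            static void Download(string url, string file)
            {
                try
                {
                    using (var client = new WebClient())
                    {
                        client.DownloadFile(url, file);
                    }
                }
                catch (WebException ex)
                {
                    if (ex.Status == WebExceptionStatus.NameResolutionFailure)
                    {
                        // Remote name could not be resolved: no connectivity or bad host.
                    }
                    else if (ex.Status == WebExceptionStatus.ProtocolError)
                    {
                        var response = ex.Response as HttpWebResponse;
                        if (response != null && response.StatusCode == HttpStatusCode.NotFound)
                        {
                            // 404: the file doesn't exist on the server.
                        }
                    }
                }
            }
        }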

  • How to round CGFloat

    - by Johannes Jensen
    I made this method:

        + (CGFloat)round:(CGFloat)f {
            int a = f;
            CGFloat b = a;
            return b;
        }

    It works, but note that the int cast truncates toward zero: positive values are rounded down, while negative values are actually rounded toward zero (i.e. up). This was just a quick method I made; it isn't very important that it rounds correctly, I just made it to round the camera's x and y values for my game. Is this method okay? Is it fast? Or is there a better solution?
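
    A sketch of the usual alternative, assuming <math.h> is available: roundf() rounds to the nearest integer (halves away from zero) and handles negative numbers correctly. On 64-bit platforms, where CGFloat is a double, round() is the matching variant:

        #include <math.h>

        + (CGFloat)round:(CGFloat)f {
            // Rounds to the nearest integer; -1.5 becomes -2, not -1.
            // Use round() instead of roundf() where CGFloat is a double.
            return roundf(f);
        }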

  • Java 1.4 to Java 6 migration

    - by joesatch
    Hi, I have some enterprise apps running on Java 1.4. They mostly invoke stored procedures on the DB, parse XML files (not too large, a few megs at most), and read/write from/to disk. We now have a requirement to migrate these apps to Java 6, with no code changes at all. My questions:

    1. If I don't recompile my apps under Java 6 and just run them with it, will they work fine? (I know they 'should', but if somebody thinks otherwise, could you kindly share your thoughts please?)
    2. More importantly, will it have any performance impact? That is, an app compiled on 1.4 and running on 1.6 vs. an app compiled and running on 1.6. Is 1.6 going to do any bytecode optimization for the same old piece of code compared to 1.4?

    Many thanks, js

  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocomplete works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours, I found that /etc/bash_autocomplete did not exist on the broken server. After copying the one from a working server, it now works, but still not properly: sudo apt-get ins<tab> does not do anything. Listing the files in /etc/bash_autocomplete.d/ on the working server shows about 50 files, while the broken one has only two or three. I don't think I can just copy those files, though, as that might add completions for things that are not even installed.

    TL;DR: autocomplete is broken; how can I fix it? It seems like it's disabled somewhere. Why is this?

    EDIT: OK, it was never installed...

        $ sudo apt-get install bash-completion
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed
          bash-completion
        0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
        Need to get 140kB of archives.
        After this operation, 1,061kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
        Fetched 140kB in 0s (174kB/s)
        Selecting previously deselected package bash-completion.
        (Reading database ... 23808 files and directories currently installed.)
        Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
        Processing triggers for man-db ...
        Setting up bash-completion (1:1.2-2ubuntu1.1) ...

    It's now kind of working, but still wonky: apt-get ins<tab> gives sudo apt-get insserv as the only option, and apt-get install php5<tab> gives apt-get install php5/, not the php5-* options.
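
    One thing worth checking, as a hedged suggestion: on Ubuntu the completion rules only take effect if /etc/bash_completion is actually sourced, typically from ~/.bashrc, and a new shell is needed after installing the package. A sketch of the stanza to look for (or add) in ~/.bashrc:

        # Source the system-wide completion rules if they are installed.
        if [ -f /etc/bash_completion ]; then
            . /etc/bash_completion
        fi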

  • Naming conventions and field naming question for CakePHP

    - by jphenow
    Okay, two very related questions:

    1. Does following the naming convention for classes, controllers, database fields, etc. affect the framework's ability to work the way it was intended? (I'm a little new to working with a framework from the beginning of app development.)
    2. This question matters more if 1 is a yes. Say I have a table, A, that has two foreign keys pointing at the same table, B, but at different entries (they're like the edges of a graph pointing at two vertices). How would I follow the naming convention for those database fields? All I can think to do is something like vertex_1_id and vertex_2_id, but I don't know how the framework would handle that if the naming conventions are necessary for it to function correctly.
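
    For what it's worth, CakePHP lets you override the convention per association, so two foreign keys into the same table are handled by aliasing the associations. A hedged sketch (model and field names hypothetical, CakePHP 1.x style):

        class Edge extends AppModel {
            var $belongsTo = array(
                'StartVertex' => array(
                    'className'  => 'Vertex',   // both aliases map to the same model
                    'foreignKey' => 'vertex_1_id',
                ),
                'EndVertex' => array(
                    'className'  => 'Vertex',
                    'foreignKey' => 'vertex_2_id',
                ),
            );
        }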

  • What is the fastest way to unzip text files in Matlab during a function?

    - by Paul
    Hello all, I would like to scan the text of text files in Matlab with the textscan function. Before I can open a text file with fid = fopen('C:\path'), I need to unzip it first; the files have the extension *.gz. There are thousands of files which I need to analyze, and high performance is important. I have two ideas:

    1. Use an external program and call it from the command line in Matlab.
    2. Use a Matlab 'zip' toolbox. I have heard of gunzip, but don't know about its performance.

    Does anyone know a way to unzip these files as quickly as possible from within Matlab? Thanks!
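
    A sketch of the second idea using MATLAB's built-in gunzip (the paths are hypothetical); it decompresses in-process, so there is no per-file cost of spawning an external program:

        % Decompress every .gz file in the folder, then scan its text.
        srcDir = 'C:\path';
        files  = dir(fullfile(srcDir, '*.gz'));
        for k = 1:numel(files)
            % gunzip returns a cell array of the extracted file names.
            names = gunzip(fullfile(srcDir, files(k).name), tempdir);
            fid   = fopen(names{1});
            data  = textscan(fid, '%s');
            fclose(fid);
        end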

  • SLES 9 vs. SLES 10

    - by Michael Covelli
    Are there any important changes in how SLES 10 implements TCP sockets vs. SLES 9? I have several apps written in C# (.NET 3.5) that run on Windows XP and Windows Server 2003. They've been running fine for over a year, getting market data from a SLES 9 machine over a socket connection. The machine was upgraded today to SLES 10, and it's causing some strange behavior. The socket normally returns a few hundred or thousand bytes every second, but occasionally I stop receiving data: ten or more seconds will go by with no data, and then Receive returns with 10k+ bytes. And some buffer is causing data loss, because the bytes I receive on the socket no longer make a correct packet. The only thing changed was the SLES 9 to 10 upgrade, and rolling back fixes this immediately. Any ideas?

  • How can I make VS2010 behave like VS2008 w/r/t indentation?

    - by Portman
    Situation: I have a plain text file where indentation is important:

        line 1
          line 1.1 (indented two spaces)
          line 1.2 (indented two spaces)
            line 1.2.3 (indented four spaces)

    In Visual Studio 2008, when I pressed Enter, the next line would also be indented four spaces. However, in Visual Studio 2010, when I press Enter, the next line is indented one tab.

    Question: Does anybody know where, in the mountain of preferences under Tools > Options, I can return to the way Visual Studio 2008 worked? Under Options > Text Editor > Plain Text > Tabs, I see the following: if I select "None", I get no indentation when I move to the next line; if I select "Block", I get tab indentation (even though the previous line uses spaces). In Visual Studio 2008, my indentation is set to "Block", and I get spaces. I have no idea what "Smart" indenting is, or why it is disabled.

  • Use python decorators on class methods and subclass methods

    - by AlexH
    Goal: Make it possible to decorate class methods. When a class method gets decorated, it gets stored in a dictionary so that other class methods can reference it by a string name.

    Motivation: I want to implement the equivalent of ASP.NET's WebMethods. I am building this on top of Google App Engine, but that does not affect the point of difficulty that I am having.

    How it would look if it worked:

        class UsefulClass(WebmethodBaseClass):
            def someMethod(self, blah):
                print(blah)

            @webmethod
            def webby(self, blah):
                print(blah)

        # The implementation of this class could be completely different; it does
        # not matter. The only important thing is having access to the web methods
        # defined in subclasses.
        class WebmethodBaseClass():
            def post(self, methodName):
                webmethods[methodName]("kapow")

        ...

        a = UsefulClass()
        a.post("someMethod")  # should error
        a.post("webby")       # prints "kapow"

    There could be other ways to go about this. I am very open to suggestions.
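
    One workable variant, as a hedged sketch: instead of a module-level dictionary (awkward because methods are unbound when the class body runs), the decorator merely tags the function, and post() checks the tag at call time:

        def webmethod(func):
            # Mark the function; no registry is needed at class-creation time.
            func.is_webmethod = True
            return func

        class WebmethodBaseClass(object):
            def post(self, method_name, *args):
                method = getattr(self, method_name, None)
                if method is None or not getattr(method, 'is_webmethod', False):
                    raise ValueError('%s is not a webmethod' % method_name)
                return method(*args)

        class UsefulClass(WebmethodBaseClass):
            def someMethod(self, blah):
                print(blah)

            @webmethod
            def webby(self, blah):
                print(blah)

        a = UsefulClass()
        a.post('webby', 'kapow')       # prints "kapow"
        a.post('someMethod', 'nope')   # raises ValueError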

  • F# currying efficiency?

    - by Eamon Nerbonne
    I have a function that looks as follows:

        let isInSet setElems normalize p =
            normalize p |> (Set.ofList setElems).Contains

    This function can be used to quickly check whether an element is semantically part of some set; for example, to check if a file path belongs to an HTML file:

        let getLowerExtension p = (Path.GetExtension p).ToLowerInvariant()
        let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension

    However, when I use a function such as the above, performance is poor, since evaluation of the function body as written in isInSet seems to be delayed until all parameters are known. In particular, invariant bits such as (Set.ofList setElems).Contains are re-evaluated on each execution of isHtmlPath. How can I best maintain F#'s succinct, readable nature while still getting the more efficient behavior in which the set construction is pre-evaluated? The above is just an example; I'm looking for a general pattern that avoids bogging me down in implementation details. Where possible I'd like to avoid being distracted by details such as the implementation's execution order, since that's usually not important to me, and having to think about it undermines a major selling point of functional programming.
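
    A sketch of the usual fix: take only the arguments needed to build the set, do the expensive work once, and return a closure, so that partial application pins the precomputed set:

        let isInSet setElems normalize =
            let elems = Set.ofList setElems      // built once, at partial application
            fun p -> elems.Contains(normalize p)

        // The set behind isHtmlPath is now constructed a single time.
        let isHtmlPath = isInSet [".htm"; ".html"; ".xhtml"] getLowerExtension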

  • How to improve this bash shell script for turning hardlinks into symlinks?

    - by MountainX
    This shell script is mostly the work of other people. It has gone through several iterations, and I have tweaked it slightly while also trying to fully understand how it works. I think I understand it now, but I don't have the confidence to significantly alter it on my own and risk losing data when I run the altered version. So I would appreciate some expert guidance on how to improve this script. The changes I am seeking are:

    1. Make it even more robust to any strange file names, if possible. It currently handles spaces in file names, but not newlines. I can live with that (because I try to find any file names with newlines and get rid of them).
    2. Make it more intelligent about which file gets retained as the actual inode content and which file(s) become symlinks. I would like to be able to choose to retain the file that has either (a) the shortest path, (b) the longest path, or (c) the filename with the most alpha characters (which will probably be the most descriptive name). See the sketch after the script below.
    3. Allow it to read the directories to process either from parameters passed in or from a file.
    4. Optionally, write a log of all changes and/or all files not processed.

    Of all of these, #2 is the most important for me right now. I need to process some files with it, and I need to improve the way it chooses which files to turn into symlinks. (I tried using things like the find option -depth without success.) Here's the current script:

        #!/bin/bash

        # Clean up known problematic files first.
        ## find /home -type f -wholename '*Icon*
        ## *' -exec rm '{}' \;

        # Configure script environment
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        set -o nounset
        dir='/SOME/PATH/HERE/'

        # For each path which has multiple links
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        # (except ones containing newline)
        last_inode=
        while IFS= read -r path_info
        do
            #echo "DEBUG: path_info: '$path_info'"
            inode=${path_info%%:*}
            path=${path_info#*:}
            if [[ $last_inode != $inode ]]; then
                last_inode=$inode
                path_to_keep=$path
            else
                printf "ln -s\t'$path_to_keep'\t'$path'\n"
                rm "$path"
                ln -s "$path_to_keep" "$path"
            fi
        done < <( find "$dir" -type f -links +1 ! -wholename '*
        *' -printf '%i:%p\n' | sort --field-separator=: )

        # Warn about any excluded files
        # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        buf=$( find "$dir" -type f -links +1 -path '*
        *' )
        if [[ $buf != '' ]]; then
            echo 'Some files not processed because their paths contained newline(s):'$'\n'"$buf"
        fi

        exit 0
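
    For change #2, a hedged sketch of one way to pick the keeper: instead of taking the first path seen per inode, collect the group and choose, for example, the shortest path (the other criteria would just swap the comparison). The helper name is hypothetical:

        # Given all paths that share one inode (as arguments), print the
        # one with the shortest path, to be kept as the real file.
        pick_shortest() {
            local keep=$1 p
            for p in "$@"; do
                if [ "${#p}" -lt "${#keep}" ]; then
                    keep=$p
                fi
            done
            printf '%s\n' "$keep"
        }

        pick_shortest '/a/very/long/path/file' '/short/file'   # prints /short/file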

  • Python decoding issue with hashlib.digest() method

    - by Sorw
    Hello StackOverflow community, using Google App Engine, I wrote a keyToSha256() method within a model class (extending db.Model):

        class Car(db.Model):
            def keyToSha256(self):
                keyhash = hashlib.sha256(str(self.key())).digest()
                return keyhash

    When displaying the output (ultimately within a Django template), I get garbled text, for example:

        ?????_??!`?I?!?;?QeqN??Al?'2

    I was expecting something more in line with this:

        9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08

    Am I missing something important? Despite reading several guides on ASCII, Unicode, UTF-8 and the like, I think I'm still far from mastering the secrets of string encoding/decoding. After browsing StackOverflow and searching for insights via Google, I figured I should ask the question here. Any idea? Thanks!
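
    The two hashlib methods differ exactly along these lines: digest() returns the raw hash bytes (which render as garbage in a template), while hexdigest() returns the printable hex string the question expects. A minimal sketch:

        import hashlib

        raw   = hashlib.sha256(b'test').digest()     # 32 raw bytes, not printable
        hexed = hashlib.sha256(b'test').hexdigest()  # 64 hex characters
        print(hexed)
        # 9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08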

  • How to compile all source files (default make target does not compile all of them)

    - by Piotr Krukowiecki
    Hi, when I compile Android (http://source.android.com/download), it does not compile some source files. For example, there is external/bluetooth/bluez/sbc/sbc.c, which is not compiled; there are other such files too. It's possible those files need not be compiled, or it might be that I need some special configuration to compile them. Either way, if it is possible, I'd like to compile them. Is there some way to do it? Maybe some "compile_all" make target? (I believe the reason why I want to compile all source files is not important.)

  • Postfix not delivering email using Maildir

    - by Greg K
    I've followed this guide to get Postfix set up. I've not completed it yet, and since switching to Maildir from mbox, test emails are no longer being delivered. I have created a Maildir directory with cur, new and tmp subdirectories:

        ~$ ll
        drwxrwxr-x 5 greg greg 4096 2012-07-07 16:40 Maildir/
        ~$ ll Maildir/
        drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 cur
        drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 new
        drwxrwxr-x 2 greg greg 4096 2012-07-07 16:40 tmp

    Send a test email:

        ~$ netcat mail.example.com 25
        220 ubuntu ESMTP Postfix (Ubuntu)
        ehlo example.com
        250-ubuntu
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
        mail from: [email protected]
        250 2.1.0 Ok
        rcpt to: [email protected]
        250 2.1.5 Ok
        data
        354 End data with <CR><LF>.<CR><LF>
        Subject: test email

        Hi,
        Just testing.
        .
        250 2.0.0 Ok: queued as 56B541EA53
        quit
        221 2.0.0 Bye

    Check the mail queue:

        ~$ mailq
        Mail queue is empty

    Check if the mail has been delivered:

        ~$ ls -l Maildir/new
        total 0

    Some Postfix settings:

        ~$ sudo postconf home_mailbox
        home_mailbox = Maildir/
        ~$ sudo postconf mailbox_command
        mailbox_command =

    /var/log/mail.log:

        Jul 7 16:57:33 li305-246 postfix/smtpd[21039]: connect from example.com[178.79.168.xxx]
        Jul 7 16:58:14 li305-246 postfix/smtpd[21039]: 56B541EA53: client=example.com[178.79.168.xxx]
        Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 56B541EA53: message-id=<20120707155814.56B541EA53@ubuntu>
        Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: from=<[email protected]>, size=321, nrcpt=1 (queue active)
        Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 56B541EA53: to=<[email protected]>, relay=none, delay=30, delays=30/0.01/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself)
        Jul 7 16:58:33 li305-246 postfix/cleanup[21042]: 1F68B1EA55: message-id=<20120707155833.1F68B1EA55@ubuntu>
        Jul 7 16:58:33 li305-246 postfix/bounce[21044]: 56B541EA53: sender non-delivery notification: 1F68B1EA55
        Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: from=<>, size=1999, nrcpt=1 (queue active)
        Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 56B541EA53: removed
        Jul 7 16:58:33 li305-246 postfix/smtp[21043]: 1F68B1EA55: to=<[email protected]>, relay=none, delay=0, delays=0/0/0/0, dsn=5.4.6, status=bounced (mail for example.com loops back to myself)
        Jul 7 16:58:33 li305-246 postfix/qmgr[20882]: 1F68B1EA55: removed
        Jul 7 16:58:36 li305-246 postfix/smtpd[21039]: disconnect from domain.me[178.79.168.xxx]
        Jul 7 17:10:38 li305-246 postfix/master[20878]: terminating on signal 15
        Jul 7 17:10:39 li305-246 postfix/master[21254]: daemon started -- version 2.8.5, configuration /etc/postfix

    Any ideas?
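
    One thing the log does show, independent of the Maildir change: the bounce reason is "mail for example.com loops back to myself", which usually means the domain is not listed in mydestination, so Postfix tries to relay the mail to itself instead of delivering it locally. A hedged sketch of the main.cf line to check (the hostname values are hypothetical):

        # /etc/postfix/main.cf
        # List every domain this box should deliver locally; otherwise Postfix
        # relays mail for the domain back to itself and bounces it.
        mydestination = example.com, ubuntu, localhost.localdomain, localhost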

  • Are there some general Network programming best practices?

    - by uriDium
    I am implementing some networking functionality in our project. It has been decided that the communication is very important and we want to do it synchronously: the client sends something, and the server acknowledges it. Are there some general best practices for this interaction between the client and the server? For instance: if there isn't an answer from the server, should the client automatically retry? Should there be a timeout period before it retries? What happens if the acknowledgement fails? At what point do we break the connection and reconnect? Is there some material on this? I have done searches, but nothing is really coming up. I am looking for best practices in general. I am implementing this in C# (probably with sockets), so if there is anything .NET-specific, please let me know too.
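
    As a hedged sketch of the usual shape of an answer, not a prescribed pattern: bound the ack wait with a receive timeout, retry a fixed number of times with back-off, and surface the failure after the last attempt:

        using System;
        using System.Net.Sockets;
        using System.Threading;

        static class ReliableSend
        {
            public static void SendWithRetry(Socket socket, byte[] payload, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        socket.ReceiveTimeout = 5000;       // fail the ack wait after 5 s
                        socket.Send(payload);
                        var ack = new byte[1];
                        socket.Receive(ack);                // blocks until ack or timeout
                        return;                             // acknowledged; done
                    }
                    catch (SocketException)
                    {
                        if (attempt >= maxAttempts) throw;  // give up, surface the error
                        Thread.Sleep(attempt * 1000);       // simple linear back-off
                    }
                }
            }
        }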

  • How to create a UDF that takes a query string and returns the query's resultset

    - by Martin
    I want to create a stored procedure that takes a simple SELECT statement and returns the resultset as a CSV string. The basic idea is to get the SQL statement from user input, run it using EXEC(@stmt), and convert the resultset to text using cursors. However, SQL Server doesn't allow:

        SELECT * FROM storedprocedure(@sqlStmt)

    or a UDF containing EXEC(@sqlStmt), so I tried INSERT INTO #tempTable EXEC(@sqlStmt), but this doesn't work either (error: "invalid object name '#tempTable'"). I'm stuck. Could you please shed some light on this matter? Many thanks.

    EDIT: Actually the output format (a CSV string) is not important. The problem is that I don't know how to attach a cursor to the resultset returned by EXEC. SPs and UDFs do not work with EXEC(), while creating a temp table before inserting the values is impossible without knowing the input statement. I thought of OPENQUERY, but it does not accept variables as its parameters.
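
    For reference, a hedged sketch of why the INSERT ... EXEC attempt failed: it does work, but only when the temp table already exists with a column list matching the dynamic SELECT, which is exactly the part the question says is unknown (the table shape here is hypothetical):

        -- Works only because the target table is created first, with columns
        -- matching whatever the dynamic SELECT returns.
        CREATE TABLE #tempTable (page_id INT, category VARCHAR(50));

        DECLARE @sqlStmt NVARCHAR(500);
        SET @sqlStmt = N'SELECT page_id, category FROM categories';

        INSERT INTO #tempTable
        EXEC(@sqlStmt);

        SELECT * FROM #tempTable;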

  • Elasticsearch won't start anymore

    - by Oleander
    I restarted my Elasticsearch instance 5 days ago and I haven't managed to start it since. I get no output in the log files under /var/log/elasticsearch/, nor does the elasticsearch binary print any information when run in the foreground with elasticsearch -f. I once managed to get this output:

        [2012-11-15 22:51:18,427][INFO ][node   ] [Piper] {0.19.11}[29584]: initializing ...
        [2012-11-15 22:51:18,433][INFO ][plugins] [Piper] loaded [], sites []

    Running curl http://localhost:9200 results in curl: (7) couldn't connect to host. I've tried increasing the memory from 3gb to 10gb, but that didn't make any difference. Running /etc/init.d/elasticsearch start takes 30 seconds. ps aux | grep elasticsearch shows this output:

        /usr/local/share/elasticsearch/bin/service/exec/elasticsearch-linux-x86-64 /usr/local/share/elasticsearch/bin/service/elasticsearch.conf wrapper.syslog.ident=elasticsearch wrapper.pidfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.pid wrapper.name=elasticsearch wrapper.displayname=ElasticSearch wrapper.daemonize=TRUE wrapper.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.status wrapper.java.statusfile=/usr/local/share/elasticsearch/bin/service/./elasticsearch.java.status wrapper.script.version=3.5.14
        /usr/lib/jvm/java-7-openjdk-amd64/jre/bin/java -Delasticsearch-service -Des.path.home=/usr/local/share/elasticsearch -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Xms1024m -Xmx1024m -Djava.library.path=/usr/local/share/elasticsearch/bin/service/lib -classpath /usr/local/share/elasticsearch/bin/service/lib/wrapper.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/elasticsearch-0.19.11.jar:/usr/local/share/elasticsearch/lib/jna-3.3.0.jar:/usr/local/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/local/share/elasticsearch/lib/lucene-analyzers-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-core-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-highlighter-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-memory-3.6.1.jar:/usr/local/share/elasticsearch/lib/lucene-queries-3.6.1.jar:/usr/local/share/elasticsearch/lib/snappy-java-1.0.4.1.jar:/usr/local/share/elasticsearch/lib/sigar/sigar-1.6.4.jar -Dwrapper.key=k7r81VpK3_Bb3N_5 -Dwrapper.port=32000 -Dwrapper.jvm.port.min=31000 -Dwrapper.jvm.port.max=31999 -Dwrapper.disable_console_input=TRUE -Dwrapper.pid=23888 -Dwrapper.version=3.5.14 -Dwrapper.native_library=wrapper -Dwrapper.service=TRUE -Dwrapper.cpu.timeout=10 -Dwrapper.jvmid=1 org.tanukisoftware.wrapper.WrapperSimpleApp org.elasticsearch.bootstrap.ElasticSearchF

    My current system:

        ElasticSearch version: 0.19.11, JVM: 23.2-b09
        Ubuntu 12.04 LTS

    I've tried reinstalling Elasticsearch and removing the old directories. Why can't I get it to start?

  • Assigning a GD reference to a new variable fails to copy

    - by Stomped
    This is a contrived example, but it illustrates my problem much more concisely than the code I'm using, and I've tested that it exhibits the problem:

        $image = imagecreatefromjpeg('test.jpg');
        $copy_of_image = $image; // The important bit
        imagedestroy($image);
        header('Content-type: image/jpeg');
        imagejpeg($copy_of_image);

    Now, my expectation is that $copy_of_image is exactly that, but when I run this, it fails, printing out the URL of the script of all things. Comment out the imagedestroy() and it works just fine. A var_dump of $image gives:

        resource(3) of type (gd)

    So why can't I copy this? Apparently the assignment $copy_of_image = $image creates a reference rather than a copy. Is there a way to prevent that?
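
    That matches how PHP resources behave: the assignment copies the resource handle, not the image data, so destroying the original invalidates both variables. A hedged sketch of making a genuinely independent copy by allocating a new canvas and blitting the pixels across:

        $image = imagecreatefromjpeg('test.jpg');
        $w = imagesx($image);
        $h = imagesy($image);

        // Allocate a second canvas and copy the pixels into it.
        $copy_of_image = imagecreatetruecolor($w, $h);
        imagecopy($copy_of_image, $image, 0, 0, 0, 0, $w, $h);

        imagedestroy($image); // the copy is unaffected now
        header('Content-type: image/jpeg');
        imagejpeg($copy_of_image);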

  • Android: Image displayed in a WebView from a URL with high quality loss

    - by Merlino
    I want to display an image from a URL in a WebView on Android. On Android phones with version 1.5 and 1.6 there is no problem, but with the same picture and the same code on an Android phone with version 2.0, the picture is totally pixelated. It's as if Android first resizes the image to a smaller one and then resizes it back to "normal" size. Unfortunately, it's important to display the picture without any quality loss. I tried putting it in the source folder to show it as a normal image, but on Android 2.0 I get an exception because the image is too big (on Android 1.6 there is no problem). Any ideas how I can display the image without quality loss on Android 2.0?

  • Library to parse ERB files

    - by Douglas Sellers
    I am attempting to parse, not evaluate, Rails ERB files in an Hpricot/Nokogiri-like manner. The files I am attempting to parse contain HTML fragments intermixed with dynamic content generated using ERB (standard Rails view files). I am looking for a library that will not only parse the surrounding content, much the way that Hpricot or Nokogiri would, but will also treat the ERB symbols (<%, <%= etc.) as though they were HTML/XML tags. Ideally I would get back a DOM-like structure where the <% and <%= symbols are included as their own node types. I know that it is possible to hack something together using regular expressions, but I was looking for something a bit more reliable, as I am developing a tool that I need to run on a very large view codebase where both the HTML content and the ERB content are important. For example, content such as:

        blah blah blah <div>My Great Text <%= my_dynamic_expression %></div>

    would return a tree structure like:

        root
        - text_node (blah blah blah)
        - element (div)
          - text_node (My Great Text )
          - erb_node (<%=)

  • WCF SSL secure transfer of large payloads without changing firewall

    - by Sir Mix
    I need to transfer small amounts of data intermittently from clients to our server in a secure fashion, and occasionally pull down large binary files from the server. It's important for all of this to be reliable. I'm anticipating 100,000 clients. I control both ends, but I want to deliver a solution that doesn't require changing the firewall for the majority of customers. A lag of one or two minutes before the information migrates to the server or comes down seems acceptable at this time. We need to make the connection secure, so I was thinking about SSL, but I'm open to suggestions. Basically: what is the best binding to use in this situation, so that we have a secure transmission and the system handles the stress and load in a way that works out of the box for 95% of clients (i.e. is not blocked by the majority of firewall configurations)?
