Search Results

Search found 17406 results on 697 pages for 'option explicit'.


  • Discovering a functional algorithm from a mutable one

    - by Garrett Rowe
    This isn't necessarily a Scala question, it's a design question that has to do with avoiding mutable state, functional thinking and that sort. It just happens that I'm using Scala. Given this set of requirements: Input comes from an essentially infinite stream of random numbers between 1 and 10 Final output is either SUCCEED or FAIL There can be multiple objects 'listening' to the stream at any particular time, and they can begin listening at different times so they all may have a different concept of the 'first' number; therefore listeners to the stream need to be decoupled from the stream itself. Pseudocode: if (first number == 1) SUCCEED else if (first number >= 9) FAIL else { first = first number rest = rest of stream for each (n in rest) { if (n == 1) FAIL else if (n == first) SUCCEED else continue } } Here is a possible mutable implementation: sealed trait Result case object Fail extends Result case object Succeed extends Result case object NoResult extends Result class StreamListener { private var target: Option[Int] = None def evaluate(n: Int): Result = target match { case None => if (n == 1) Succeed else if (n >= 9) Fail else { target = Some(n) NoResult } case Some(t) => if (n == t) Succeed else if (n == 1) Fail else NoResult } } This will work but smells to me. StreamListener.evaluate is not referentially transparent. And the use of the NoResult token just doesn't feel right. It does have the advantage though of being clear and easy to use/code. Besides there has to be a functional solution to this right? I've come up with 2 other possible options: Having evaluate return a (possibly new) StreamListener, but this means I would have to make Result a subtype of StreamListener which doesn't feel right. Letting evaluate take a Stream[Int] as a parameter and letting the StreamListener be in charge of consuming as much of the Stream as it needs to determine failure or success. The problem I see with this approach is that the class that registers the listeners should query each listener after each number is generated and take appropriate action immediately upon failure or success. With this approach, I don't see how that could happen since each listener is forcing evaluation of the Stream until it completes evaluation. There is no concept here of a single number generation. Is there any standard scala/fp idiom I'm overlooking here?
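    One functional idiom that fits these requirements is to make the listener immutable and have evaluate return the verdict together with the next listener state, so the registry can still react after every single generated number. The sketch below only illustrates that shape - it is Python rather than Scala, and the names simply mirror the pseudocode above:

        from dataclasses import dataclass
        from typing import Optional

        SUCCEED, FAIL, NO_RESULT = "SUCCEED", "FAIL", "NO_RESULT"

        @dataclass(frozen=True)
        class StreamListener:
            """Immutable listener: evaluate() returns (result, next_listener)."""
            target: Optional[int] = None   # the remembered 'first' number, if any

            def evaluate(self, n):
                if self.target is None:
                    if n == 1:
                        return SUCCEED, self
                    if n >= 9:
                        return FAIL, self
                    return NO_RESULT, StreamListener(target=n)  # new state, old untouched
                if n == self.target:
                    return SUCCEED, self
                if n == 1:
                    return FAIL, self
                return NO_RESULT, self

        # The registry threads the returned listener through each generated number,
        # so it can still act immediately on SUCCEED or FAIL.
        def run(numbers):
            listener = StreamListener()
            for n in numbers:
                result, listener = listener.evaluate(n)
                if result != NO_RESULT:
                    return result
            return NO_RESULT

        print(run([5, 3, 7, 5]))  # SUCCEED

    In Scala the same shape would typically be a case class whose evaluate returns a (Result, StreamListener) pair (or an Either), which keeps evaluate referentially transparent without making Result a subtype of the listener.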


  • Startup in Windows 7

    - by iira
    Hi, I am trying to make my program run at Windows 7 startup, but it doesn't work. My program has an embedded UAC manifest. My current way is adding a String Value at HKCU..\Run. I found a manual solution for Vista at http://social.technet.microsoft.com/Forums/en/w7itprosecurity/thread/81c3c1f2-0169-493a-8f87-d300ea708ecf 1. Click Start, right click on Computer and choose “Manage”. 2. Click “Task Scheduler” on the left panel. 3. Click “Create Task” on the right panel. 4. Type a name for the task. 5. Check “Run with highest privileges”. 6. Click the Actions tab. 7. Click “New…”. 8. Browse to the program in the “Program/script” box. Click OK. 9. On the desktop, right click, choose New and click “Shortcut”. 10. In the box type: schtasks.exe /run /tn TaskName where TaskName is the name of the task you entered on the Basics tab, then click Next. 11. Type a name for the shortcut and click Finish. Additionally, you need to run the saved scheduled-task shortcut to start the program, rather than the application shortcut, so that the UAC prompt is avoided. At startup the system will still run the program via the original shortcut, so you need to change the startup entry to run the saved task instead: 1. Open Regedit. 2. Find the entry of the startup item in the Registry. It will be stored in one of the following branches. HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Run HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run 3. Double-click on the correct key and change the path to point to the saved scheduled task you created. Is there any free code to add a task with the "run with highest privileges" option to the Task Scheduler? I haven't found a free one on torry.net. Thanks a lot.


  • What is the best practice for adding persistence to an MVC model?

    - by etheros
    I'm in the process of implementing an ultra-light MVC framework in PHP. It seems to be a common opinion that the loading of data from a database, file etc. should be independent of the Model, and I agree. What I'm unsure of is the best way to link this "data layer" into MVC. Datastore interacts with Model //controller public function update() { $model = $this->loadModel('foo'); $data = $this->loadDataStore('foo', $model); $data->loadBar(9); //loads data and populates Model $model->setBar('bar'); $data->save(); //reads data from Model and saves } Controller mediates between Model and Datastore Seems a bit verbose and requires the model to know that a datastore exists. //controller public function update() { $model = $this->loadModel('foo'); $data = $this->loadDataStore('foo'); $model->setDataStore($data); $model->getDataStore()->loadBar(9); //loads data and populates Model $model->setBar('bar'); $model->getDataStore()->save(); //reads data from Model and saves } Datastore extends Model What happens if we want to save a Model extending a database datastore to a flatfile datastore? //controller public function update() { $model = $this->loadHybrid('foo'); //get_class == Datastore_Database $model->loadBar(9); //loads data and populates $model->setBar('bar'); $model->save(); //saves } Model extends datastore This allows for Model portability, but it seems wrong to extend like this. Further, the datastore cannot make use of any of the Model's methods. //controller extends model public function update() { $model = $this->loadHybrid('foo'); //get_class == Model $model->loadBar(9); //loads data and populates $model->setBar('bar'); $model->save(); //saves } EDIT: Model communicates with DAO //model public function __construct($dao) { $this->dao = $dao; } //model public function setBar($bar) { //a bunch of business logic goes here $this->dao->setBar($bar); } //controller public function update() { $model = $this->loadModel('foo'); $model->setBar('baz'); $model->save(); } Any input on the "best" option - or alternative - is most appreciated.
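    For comparison, the EDIT option (the model talking to an injected DAO/repository) is usually the one that keeps the model free of storage details while still letting the controller stay thin. Here is a language-neutral sketch of that wiring, written in Python rather than PHP purely for brevity - the Foo/FooRepository names and the connection object's fetch_one/execute methods are invented stand-ins for whatever database wrapper is actually in use:

        class FooRepository:
            """Data layer: knows about storage, knows nothing about business rules."""
            def __init__(self, connection):
                self.connection = connection   # any DB wrapper exposing fetch_one/execute

            def load(self, foo_id):
                row = self.connection.fetch_one(
                    "SELECT id, bar FROM foo WHERE id = ?", (foo_id,))
                return Foo(self, row["id"], row["bar"])

            def save(self, foo):
                self.connection.execute(
                    "UPDATE foo SET bar = ? WHERE id = ?", (foo.bar, foo.id))

        class Foo:
            """Model: holds business logic, delegates persistence to the repository."""
            def __init__(self, repository, foo_id, bar):
                self._repository = repository
                self.id = foo_id
                self.bar = bar

            def set_bar(self, bar):
                # business validation goes here
                self.bar = bar

            def save(self):
                self._repository.save(self)

        class FakeConnection:
            """In-memory stand-in for a real DB wrapper, just so the sketch runs."""
            def __init__(self):
                self.rows = {9: {"id": 9, "bar": "bar"}}
            def fetch_one(self, sql, params):
                return self.rows[params[0]]
            def execute(self, sql, params):
                self.rows[params[1]]["bar"] = params[0]

        repo = FooRepository(FakeConnection())
        foo = repo.load(9)
        foo.set_bar("baz")
        foo.save()
        print(repo.connection.rows[9]["bar"])   # baz

    The controller then reads exactly like the EDIT example: load the model via the repository, call set_bar, call save; swapping a flat-file repository for a database one never touches the model.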


  • Handling exceptions raised in observers

    - by sparky
    I have a Rails (2.3.5) application where there are many groups, and each Group has_many People. I have a Group edit form where users can create new people. When a new person is created, they are sent an email (the email address is entered by the user on the form). This is accomplished with an observer on the Person model. The problem comes when ActionMailer throws an exception - for example if the domain does not exist. Clearly that cannot be weeded out with a validation. There would seem to be 2 ways to deal with this: A begin...rescue...end block in the observer around the mailer. The problem with this is that the only way to pass any feedback to the user would be to set a global variable - as the observer is out of the MVC flow, I can't even set a flash[:error] there. A rescue_from in the Groups controller. This works fine, but has 2 problems. Firstly, there is no way to know which person threw the exception (all I can get is the 503 exception, no way to know which person caused the problem). This would be useful information to be able to pass back to the user - at the moment, there is no way for me to let them know which email address is the problem; I just have to chuck the lot back at them and issue an unhelpful message saying that one of them is not correct. Secondly (and to a certain extent this makes the first point moot) it seems that it is necessary to call a render in the rescue_from, or it dies with a rather bizarre "can't convert Array into String" error from WEBrick, with no stack trace & nothing in the log. Thus, I have to throw it back to the user when I come across the first error and have to stop processing the rest of the emails. Neither of the solutions is optimal. It would seem that the only way to get Rails to do what I want is option 1, and loathsome global variables. This would also rely on Rails being single threaded. Can anyone suggest a better solution to this problem?


  • How to change the extension of a processed xml file (using eXist & cocoon)

    - by Carsten C.
    Hi all, I'm really new to this whole web stuff, so please be nice if I missed something important to post. Short version: is there a way to change the name of a processed file (eXist-DB) after serialization? Here is my case. The following request goes to my eXist-db: http://localhost:8080/exist/cocoon/db/caos/test.xml and after serialization I want the following (the XSLT is working fine): http://localhost:8080/exist/cocoon/db/caos/test.html I'm using the following sitemap.xmap with Cocoon (hoping this is what is responsible for it) <map:match pattern="db/caos/**"> <!-- if we have an xpath query --> <map:match pattern="xpath" type="request-parameter"> <map:generate src="xmldb:exist:///db/caos/{../1}/#{1}"/> <map:act type="request"> <map:parameter name="parameters" value="true"/> <map:parameter name="default.howmany" value="1000"/> <map:parameter name="default.start" value="1"/> <map:transform type="filter"> <map:parameter name="element-name" value="result"/> <map:parameter name="count" value="{howmany}"/> <map:parameter name="blocknr" value="{start}"/> </map:transform> <map:transform src=".snip./webapp/stylesheets/db2html.xsl"> <map:parameter name="block" value="{start}"/> <map:parameter name="collection" value="{../../1}"/> </map:transform> </map:act> <map:serialize type="html" encoding="UTF-8"/> </map:match> <!-- if the whole file will be displayed --> <map:generate src="xmldb:exist:/db/caos/{1}"/> <map:transform src="..snip../stylesheets/caos2soac.xsl"> <map:parameter name="collection" value="{1}"/> </map:transform> <map:transform type="encodeURL"/> <map:serialize type="html" encoding="UTF-8"/> </map:match> So my question is: how do I change the extension of test.xml to test.html after processing the XML file? Background: I'm generating some information out of some XML databases; this info is displayed as HTML (which is working), but I want to change some entries later, after I have generated the HTML page. To make this comfortable, I want to use jQuery & Jeditable, but that code does not work on the .xml files. Saving the generated HTML is not an option. Thanks in advance for any suggestions and/or help. CC Edit: After reading around: could it be that the extension is irrelevant and that this is only a problem with port 8080? I'm confused...


  • Autocomplete server-side implementation

    - by toluju
    What is a fast and efficient way to implement the server-side component for an autocomplete feature in an html input box? I am writing a service to autocomplete user queries in our web interface's main search box, and the completions are displayed in an ajax-powered dropdown. The data we are running queries against is simply a large table of concepts our system knows about, which roughly matches the set of Wikipedia page titles. For this service obviously speed is of utmost importance, as responsiveness of the web page is important to the user experience. The current implementation simply loads all concepts into memory in a sorted set, and performs a simple log(n) lookup on a user keystroke. The tailset is then used to provide additional matches beyond the closest match. The problem with this solution is that it does not scale. It currently is running up against the VM heap space limit (I've set -Xmx2g, which is about the most we can push on our 32 bit machines), and this prevents us from expanding our concept table or adding more functionality. Switching to 64-bit VMs on machines with more memory isn't an immediate option. I've been hesitant to start working on a disk-based solution as I am concerned that disk seek time will kill performance. Are there possible solutions that will let me scale better, either entirely in memory or with some fast disk-backed implementations? Edits: @Gandalf: For our use case it is important that the autocompletion is comprehensive and isn't just extra help for the user. As for what we are completing, it is a list of concept-type pairs. For example, possible entries are [("Microsoft", "Software Company"), ("Jeff Atwood", "Programmer"), ("StackOverflow.com", "Website")]. We are using Lucene for the full search once a user selects an item from the autocomplete list, but I am not yet sure Lucene would work well for the autocomplete itself. @Glen: No databases are being used here. When I'm talking about a table I just mean the structured representation of my data. @Jason Day: My original implementation for this problem was to use a Trie, but the memory bloat with that was actually worse than the sorted set due to needing a large number of object references. I'll read up on ternary search trees to see if they could be of use.
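    One memory-angle worth noting: the sorted-set/tailSet behaviour can be reproduced with a single flat sorted array of strings plus a binary search, which removes almost all per-entry object overhead, and the identical lookup works against a sorted file on disk with only about log(n) seeks per keystroke. A rough sketch of the idea (Python for brevity; the concept list is invented):

        import bisect

        # One flat sorted list instead of millions of per-node objects.
        concepts = sorted([
            "Jeff Atwood", "Microsoft", "Microsoft Windows",
            "Stack machine", "Stack overflow", "StackOverflow.com",
        ])

        def complete(prefix, limit=10):
            """Return up to `limit` concepts starting with `prefix` (case-sensitive)."""
            start = bisect.bisect_left(concepts, prefix)   # first candidate >= prefix
            matches = []
            for concept in concepts[start:start + limit]:
                if not concept.startswith(prefix):
                    break                                  # left the prefix range
                matches.append(concept)
            return matches

        print(complete("Micro"))   # ['Microsoft', 'Microsoft Windows']
        print(complete("Stack"))   # ['Stack machine', 'Stack overflow', 'StackOverflow.com']

    A ternary search tree (or a Lucene prefix query) gives similar asymptotics; the appeal of the flat sorted layout is that it is compact enough to stay within the heap much longer, and the same data written out as a sorted file can be binary-searched on disk when it eventually no longer fits.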


  • Internet Explorer buggy when accessing a custom WebLogic provider

    - by James
    I've created a custom Weblogic Security Authentication Provider on version 10.3 that includes a custom login module to validate users. As part of the provider, I've implemented the ServletAuthenticationFilter and added one filter. The filter acts as a common log on page for all the applications within the domain. When we access any secured URLs by entering them in the address bar, this works fine in IE and Firefox. But when we bookmark the link in IE an odd thing happens. If I click the bookmark, you will see our log on page, then after you've successfully logged into the system the basic auth page will display, even though the user is already authenticated. This never happens in Firefox, only IE. It's also intermittent. 1 time out of 5 IE will correctly redirect and not show the basic auth window. Firefox and Opera will correctly redirect everytime. We've captured the response headers and compared the success and failures, they are identical. final boolean isAuthenticated = authenticateUser(userName, password, req); // Send user on to the original URL if (isAuthenticated) { res.sendRedirect(targetURL); return; } As you can see, once the user is authenticated I do a redirect to the original URL. Is there a step I'm missing? The authenticateUser() method is taken verbatim from an example in Oracle's documents. private boolean authenticateUser(final String userName, final String password, HttpServletRequest request) { boolean results; try { ServletAuthentication.login(new CallbackHandler() { @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { for (Callback callback : callbacks) { if (callback instanceof NameCallback) { NameCallback nameCallback = (NameCallback) callback; nameCallback.setName(userName); } if (callback instanceof PasswordCallback) { PasswordCallback passwordCallback = (PasswordCallback) callback; passwordCallback.setPassword(password.toCharArray()); } } } }, request); results = true; } catch (LoginException e) { results = false; } return results; I am asking the question here because I don't know if the issue is with the Weblogic config or the code. If this question is more suited to ServerFault please let me know and I will post there. It is odd that it works everytime in Firefox and Opera but not in Internet Explorer. I wish that not using Internet Explorer was an option but it is currently the company standard. Any help or direction would be appreciated. I have tested against IE 6 & 8 and deployed the custom provider on 3 different environments and I can still reproduce the bug.


  • CQRS - Should a Command try to create a "complex" master-detail entity?

    - by Simon Crabtree
    I've been reading Greg Young and Udi Dahan's thoughts on Command Query Responsibility Separation and a lot of what I read strikes a chord with me. My domain (we track vehicles which are doing deliveries) has the concept of a Route which contains one or more Stops. I need my customers to be able to set these up in our system by calling a webservice, and then be able to retrieve information about a Route and how the vehicle is progressing. In the past I would have "cut-down" DTO classes which closely resemble my domain classes, and the customer would create a RouteDto with an array of StopDto(s), and call our CreateRoute webmethod, passing in the RouteDto. When they query our system by calling the GetRouteDetails method, I would return exactly the same objects to them. One of the appealing aspects of CQRS is that the RouteDto might have all manner of properties that the customer wants to query, but have no business setting when they create a Route. So I create a separate CreateRouteRequest class which is passed in when calling the CreateRoute "command", and a Route DTO class which gets returned as a query result. class Route{ string Reference; List<Stop> Stops; } But I need my customer to provide me with Route AND Stop details when they create a route. As I see it I could either... Give my CreateRouteRequest class a Stops(s) property which is an array of "something" representing the data they need to provide about each stop - but what do I call this class? It's not a Stop as that's what I'm calling the list of DTOs inside my Route DTO, but I don't like "CreateStopRequest". I also wonder if I'm stuck in a CRUD mindset here thinking in terms of master-detail information and asking the customer to think like that too. class CreateRouteRequest{ string Reference; ... List<CreateStopRequest> Stops; } or They call CreateRoute, and then make a number of calls to an AddStopToRoute method. This feels a bit more "behavioural" but I'm going to lose the ability to treat creating a route including its stops as a single atomic command. If they create a Route and then try to add a Stop which fails due to some validation problem they're going to have a partially correct Route. The fact that I can't come up with a good name for the list of "StopCreationData" objects I'd be working with in option 1 makes me wonder if there's something I'm missing.
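    On the naming question in the first option, one way out is to treat the nested items as plain data carried by the command rather than as commands themselves - something like StopDetails or RouteStopData - so the route and its stops still arrive as one atomic message. A hedged sketch of that shape (Python dataclasses standing in for whatever DTO mechanism the webservice actually uses; all names are invented):

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class StopDetails:
            """Not a command - just the data the command needs about one stop."""
            address: str
            sequence: int

        @dataclass
        class CreateRouteCommand:
            reference: str
            stops: List[StopDetails] = field(default_factory=list)

        command = CreateRouteCommand(
            reference="ROUTE-42",
            stops=[StopDetails("1 High Street", 1), StopDetails("9 Low Road", 2)],
        )
        # The whole route, stops included, is validated and handled as a single message.
        print(len(command.stops))   # 2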


  • How to successfully implement og:image for LinkedIn

    - by Sabo
    THE PROBLEM: I am trying, without much success, to implement the Open Graph image on the site: http://www.guarenty-group.com/cz/ The homepage is completely bypassing the og:image tag, while the internal pages read all images from the site and place the og:image as the last option. Other social networks are working fine on both the internal pages and the homepage. THE CONFIGURATION: I have no share buttons or the like; all I want is to be able to share the link via my profile. The image is well over 300x300px: http://guarenty-group.com/img/gg_seal.png Here is what my head tag looks like: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <title>Guarenty Group : Pojištení pro nájemce a pronajímatelé</title> <meta name="keywords" content="" /> <meta name="description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání." /> <meta name="image_src" content="http://guarenty-group.com/img/gg_seal.png" /> <meta name="image_url" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:title" content="Pojištení pro nájemce a pronajímatelé" /> <meta property="og:url" content="http://guarenty-group.com/cz/" /> <meta property="og:image" content="http://guarenty-group.com/img/gg_seal.png" /> <meta property="og:description" content="Guarenty Group pojištuje príjem z nájmu pronajímatelum, kauci nájemcum - aby nemuseli platit velkou cástku v hotovostí predem - a dále nájemcum pojištuje príjmy, aby meli na nájem pri nemoci, úrazu ci nezamestnání [...]" /> ... </head> THE TESTING RESULTS: In order to trick the cache I have tested the site with http://www.guarenty-group.com/cz/?try=N, where I have changed the N every time. The strange thing is that the set of images found for different values of N is different. Sometimes there is no image, sometimes there is 1, 2 or 3 images, but each time there is a different set of images. In no case, though, could I find the image specified in og:image! MY QUESTIONS: https://developer.linkedin.com/documents/setting-display-tags-shares is saying one thing, and the personnel on the support forum are saying "over 300". Does anyone know: What is the official minimum dimension of the image (both w and h)? Can an image be too large? Should I use the xmlns, should I not use xmlns, or does it not matter? What are the maximum (and minimum) lengths for the og:title and og:description tags? Any other suggestion is of course welcome :) Thanks in advance, cheers~


  • How to maximise the largest contiguous block of memory in the Large Object Heap

    - by Unsliced
    The situation is that I am making a WCF call to a remote server which returns an XML document as a string. Most of the time this return value is a few K, sometimes a few dozen K, very occasionally a few hundred K, but very rarely it could be several megabytes (the first problem is that there is no way for me to know). It's these rare occasions that are causing grief. I get a stack trace that starts: System.OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown. at System.Xml.BufferBuilder.AddBuffer() at System.Xml.BufferBuilder.AppendHelper(Char* pSource, Int32 count) at System.Xml.BufferBuilder.Append(Char[] value, Int32 start, Int32 count) at System.Xml.XmlTextReaderImpl.ParseText() at System.Xml.XmlTextReaderImpl.ParseElementContent() at System.Xml.XmlTextReaderImpl.Read() at System.Xml.XmlTextReader.Read() at System.Xml.XmlReader.ReadElementString() at Microsoft.Xml.Serialization.GeneratedAssembly.XmlSerializationReaderMDRQuery.Read2_getMarketDataResponse() at Microsoft.Xml.Serialization.GeneratedAssembly.ArrayOfObjectSerializer2.Deserialize(XmlSerializationReader reader) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle, XmlDeserializationEvents events) at System.Xml.Serialization.XmlSerializer.Deserialize(XmlReader xmlReader, String encodingStyle) at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall) at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters) I've read around and it is because the Large Object Heap is just getting too fragmented, so even preceding the call with a quick check to StringBuilder.EnsureCapacity just causes the OutOfMemoryException to be thrown earlier (and because I'm guessing at what's needed, it might not actually need that much, so my check is causing more problems than it is solving). Some opinions are that there's not much I can do about it. Some of the questions I've asked myself: Use less memory - have you checked for leaks? Yes. The memory usage goes up and down, but there's no fundamental growth that guarantees this to happen. Some of the times it fails, it succeeded at that stage previously. Transfer smaller amounts? Not an option; this is a third-party web service over which I have no control (or at least it would take a long time to resolve, and in the meantime I still have a problem). Can you do something to the LOH to make it less likely to fail? ... now this is the most fruitful course. It's a 32-bit process (it has to be for various political, technical and boring reasons) but there's normally hundreds of meg free (multiples of the largest amount for which we've seen failures). Can we monitor the LOH? Using perfmon I can track the size of the heaps, but I don't think there's a way to monitor the largest available contiguous block of memory. Question is: any advice or suggestions for things to try?


  • Deterministic key serialization

    - by Mike Boers
    I'm writing a mapping class which uses SQLite as the storage backend. I am currently allowing only basestring keys but it would be nice if I could use a couple more types hopefully up to anything that is hashable (ie. same requirements as the builtin dict). To that end I would like to derive a deterministic serialization scheme. Ideally, I would like to know if any implementation/protocol combination of pickle is deterministic for hashable objects (e.g. can only use cPickle with protocol 0). I noticed that pickle and cPickle do not match: >>> import pickle >>> import cPickle >>> def dumps(x): ... print repr(pickle.dumps(x)) ... print repr(cPickle.dumps(x)) ... >>> dumps(1) 'I1\n.' 'I1\n.' >>> dumps('hello') "S'hello'\np0\n." "S'hello'\np1\n." >>> dumps((1, 2, 'hello')) "(I1\nI2\nS'hello'\np0\ntp1\n." "(I1\nI2\nS'hello'\np1\ntp2\n." Another option is to use repr to dump and ast.literal_eval to load. This would only be valid for builtin hashable types. I have written a function to determine if a given key would survive this process (it is rather conservative on the types it allows): def is_reprable_key(key): return type(key) in (int, str, unicode) or (type(key) == tuple and all( is_reprable_key(x) for x in key)) The question for this method is if repr itself is deterministic for the types that I have allowed here. I believe this would not survive the 2/3 version barrier due to the change in str/unicode literals. This also would not work for integers where 2**32 - 1 < x < 2**64 jumping between 32 and 64 bit platforms. Are there any other conditions (ie. do strings serialize differently under different conditions)? (If this all fails miserably then I can store the hash of the key along with the pickle of both the key and value, then iterate across rows that have a matching hash looking for one that unpickles to the expected key, but that really does complicate a few other things and I would rather not do it.) Any insights?
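    One way to sidestep the question of whether pickle or repr happens to be deterministic is to define a small canonical encoding for exactly the key types you allow, so the bytes are deterministic by construction. A rough sketch (Python 2 to match the snippets above; the tag format is an arbitrary choice of mine):

        def canonical_key(key):
            """Deterministically serialize int/str/unicode/tuple keys to a str."""
            if isinstance(key, (int, long)):
                return "i:%d" % key
            if isinstance(key, str):
                return "s:%d:%s" % (len(key), key)
            if isinstance(key, unicode):
                encoded = key.encode("utf-8")
                return "u:%d:%s" % (len(encoded), encoded)
            if isinstance(key, tuple):
                parts = [canonical_key(item) for item in key]
                return "t:%d:%s" % (len(parts), "".join(parts))
            raise TypeError("unsupported key type: %r" % type(key))

        print(canonical_key((1, 2, 'hello')))   # t:3:i:1i:2s:5:hello

    Because the output depends only on the key's value (lengths are prefixed, unicode is encoded as UTF-8), it is stable across pickle protocols, pickle vs. cPickle, and 32- vs. 64-bit platforms; the str/unicode distinction is still something to decide deliberately, exactly as with the repr approach.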


  • Web image loaded by thread in Android

    - by Bostjan
    I have an extended BaseAdapter in a ListActivity: private static class RequestAdapter extends BaseAdapter { and some handlers and runnables defined in it // Need handler for callbacks to the UI thread final Handler mHandler = new Handler(); // Create runnable for posting final Runnable mUpdateResults = new Runnable() { public void run() { loadAvatar(); } }; protected static void loadAvatar() { // TODO Auto-generated method stub //ava.setImageBitmap(getImageBitmap("URL"+pic)); buddyIcon.setImageBitmap(avatar); } In the getView function of the Adapter, I'm getting the view like this: if (convertView == null) { convertView = mInflater.inflate(R.layout.messageitem, null); // Creates a ViewHolder and store references to the two children views // we want to bind data to. holder = new ViewHolder(); holder.username = (TextView) convertView.findViewById(R.id.username); holder.date = (TextView) convertView.findViewById(R.id.dateValue); holder.time = (TextView) convertView.findViewById(R.id.timeValue); holder.notType = (TextView) convertView.findViewById(R.id.notType); holder.newMsg = (ImageView) convertView.findViewById(R.id.newMsg); holder.realUsername = (TextView) convertView.findViewById(R.id.realUsername); holder.replied = (ImageView) convertView.findViewById(R.id.replied); holder.msgID = (TextView) convertView.findViewById(R.id.msgID_fr); holder.avatar = (ImageView) convertView.findViewById(R.id.buddyIcon); holder.msgPreview = (TextView) convertView.findViewById(R.id.msgPreview); convertView.setTag(holder); } else { // Get the ViewHolder back to get fast access to the TextView // and the ImageView. holder = (ViewHolder) convertView.getTag(); } and the image is getting loaded this way: Thread sepThread = new Thread() { public void run() { String ava; ava = request[8].replace(".", "_micro."); Log.e("ava thread",ava+", username: "+request[0]); avatar = getImageBitmap(URL+ava); buddyIcon = holder.avatar; mHandler.post(mUpdateResults); //holder.avatar.setImageBitmap(getImageBitmap(URL+ava)); } }; sepThread.start(); Now, the problem I'm having is that if there are more items that need to display the same picture, not all of those pictures get displayed. When you scroll up and down the list maybe you end up filling all of them. When I tried the commented out line (holder.avatar.setImageBitmap...) the app sometimes force closes with "only the thread that created the view can request...". But only sometimes. Any idea how I can fix this? Either option.


  • PHP mini webserver file download

    - by snikolov
    $httpsock = @socket_create_listen("9090"); if (!$httpsock) { print "Socket creation failed!\n"; exit; } while (1) { $client = socket_accept($httpsock); $input = trim(socket_read ($client, 4096)); $input = explode(" ", $input); $input = $input[1]; $fileinfo = pathinfo($input); switch ($fileinfo['extension']) { default: $mime = "text/html"; } if ($input == "/") { $input = "index.html"; } $input = ".$input"; if (file_exists($input) && is_readable($input)) { echo "Serving $input\n"; $contents = file_get_contents($input); $output = "HTTP/1.0 200 OK\r\nServer: APatchyServer\r\nConnection: close\r\nContent-Type: $mime\r\n\r\n$contents"; } else { //$contents = "The file you requested doesn't exist. Sorry!"; //$output = "HTTP/1.0 404 OBJECT NOT FOUND\r\nServer: BabyHTTP\r\nConnection: close\r\nContent-Type: text/html\r\n\r\n$contents"; function openfile() { $filename = "a.pl"; $file = fopen($filename, 'r'); $filesize = filesize($filename); $buffer = fread($file, $filesize); $array = array("Output"=>$buffer,"filesize"=>$filesize,"filename"=>$filename); return $array; } $send = openfile(); $file = $send['filename']; $filesize = $send['filesize']; $output = 'HTTP/1.0 200 OK\r\n'; $output .= "Content-type: application/octet-stream\r\n"; $output .= 'Content-Disposition: attachment; filename="'.$file.'"\r\n'; $output .= "Content-Length:$filesize\r\n"; $output .= "Accept-Ranges: bytes\r\n"; $output .= "Cache-Control: private\n\n"; $output .= $send['Output']; $output .= "Content-Transfer-Encoding: binary"; $output .= "Connection: Keep-Alive\r\n"; } socket_write($client, $output); socket_close ($client); } socket_close ($httpsock); Hello, I am snikolov. I am creating a mini webserver with PHP and I would like to know how I can send the client a file to download with their browser, such as Firefox or Internet Explorer. I am sending a file to the user to download via sockets, but the client is not getting the filename and the download information. Can you please help me here? If I declare the file function again I get this error on my server: Fatal error: Cannot redeclare openfile() (previously declared in C:\Users\fsfdsf\sfdsfsdf\httpd.php:31) in C:\Users\hfghfgh\hfghg\httpd.php on line 29. If it is possible, I would also like to know whether the webserver can show how much bandwidth the user requests via sockets; Perl has the same option as PHP but it is more hardcore than PHP and I don't understand much about Perl. I have even seen that a mini webserver can show how much the client pulls from the server. Would it be possible for you to assist me with this code? I much appreciate it, thank you guys.


  • mounting ext4 fs with block size of 65536

    - by seaquest
    I am doing some benchmarking on EXT4 performance on Compact Flash media. I have created an ext4 fs with block size of 65536. however I can not mount it on ubuntu-10.10-netbook-i386. (it is already mounting ext4 fs with 4096 bytes of block sizes) According to my readings on ext4 it should allow such big block sized fs. I want to hear your comments. root@ubuntu:~# mkfs.ext4 -b 65536 /dev/sda3 Warning: blocksize 65536 not usable on most systems. mke2fs 1.41.12 (17-May-2010) mkfs.ext4: 65536-byte blocks too big for system (max 4096) Proceed anyway? (y,n) y Warning: 65536-byte blocks too big for system (max 4096), forced to continue Filesystem label= OS type: Linux Block size=65536 (log=6) Fragment size=65536 (log=6) Stride=0 blocks, Stripe width=0 blocks 19968 inodes, 19830 blocks 991 blocks (5.00%) reserved for the super user First data block=0 1 block group 65528 blocks per group, 65528 fragments per group 19968 inodes per group Writing inode tables: done Creating journal (1024 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 37 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. root@ubuntu:~# tune2fs -l /dev/sda3 tune2fs 1.41.12 (17-May-2010) Filesystem volume name: <none> Last mounted on: <not available> Filesystem UUID: 4cf3f507-e7b4-463c-be11-5b408097099b Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 19968 Block count: 19830 Reserved block count: 991 Free blocks: 18720 Free inodes: 19957 First block: 0 Block size: 65536 Fragment size: 65536 Blocks per group: 65528 Fragments per group: 65528 Inodes per group: 19968 Inode blocks per group: 78 Flex block group size: 16 Filesystem created: Sat Feb 5 14:39:55 2011 Last mount time: n/a Last write time: Sat Feb 5 14:40:02 2011 Mount count: 0 Maximum mount count: 37 Last checked: Sat Feb 5 14:39:55 2011 Check interval: 15552000 (6 months) Next check after: Thu Aug 4 14:39:55 2011 Lifetime writes: 70 MB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: afb5b570-9d47-4786-bad2-4aacb3b73516 Journal backup: inode blocks root@ubuntu:~# mount -t ext4 /dev/sda3 /mnt/ mount: wrong fs type, bad option, bad superblock on /dev/sda3, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
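    The mkfs warning is the underlying answer here: the kernel will only mount a filesystem whose block size is no larger than the CPU page size, which is 4096 bytes on i386/x86 kernels, so the 65536-byte-block image can be created but not mounted on that netbook. A quick way to check the limit on a given machine (a small sketch; getconf PAGESIZE from a shell reports the same value):

        import os

        desired_block_size = 65536
        page_size = os.sysconf("SC_PAGE_SIZE")   # kernel page size in bytes, e.g. 4096

        print("kernel page size: %d bytes" % page_size)
        if desired_block_size > page_size:
            print("%d-byte blocks exceed the page size; the kernel will refuse to mount"
                  % desired_block_size)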


  • Creating .lib files in CUDA Toolkit 5

    - by user1683586
    I am taking my first faltering steps with CUDA Toolkit 5.0 RC using VS2010. Separate compilation has me confused. I tried to set up a project as a Static Library (.lib), but when I try to build it, it does not create a device-link.obj and I don't understand why. For instance, there are 2 files: A caller function that uses a function f #include "thrust\host_vector.h" #include "thrust\device_vector.h" using namespace thrust::placeholders; extern __device__ double f(double x); struct f_func { __device__ double operator()(const double& x) const { return f(x); } }; void test(const int len, double * data, double * res) { thrust::device_vector<double> d_data(data, data + len); thrust::transform(d_data.begin(), d_data.end(), d_data.begin(), f_func()); thrust::copy(d_data.begin(),d_data.end(), res); } And a library file that defines f __device__ double f(double x) { return x+2.0; } If I set the option generate relocatable device code to No, the first file will not compile due to unresolved extern function f. If I set it to -rdc, it will compile, but does not produce a device-link.obj file and so the linker fails. If I put the definition of f into the first file and delete the second it builds successfully, but now it isn't separate compilation anymore. How can I build a static library like this with separate source files? [Updated here] I called the first caller file "caller.cu" and the second "libfn.cu". The compiler lines that VS2010 outputs (which I don't fully understand) are (for caller): nvcc.exe -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" -clean and the same for libfn, then: nvcc.exe -gencode=arch=compute_20,code=\"sm_20,compute_20\" --use-local-env --cl-version 2010 -ccbin "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin" -rdc=true -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v5.0\include" -G --keep-dir "Debug" -maxrregcount=0 --machine 32 --compile -g -D_MBCS -Xcompiler "/EHsc /W3 /nologo /Od /Zi /RTC1 /MDd " -o "Debug\caller.cu.obj" "G:\Test_Linking\caller.cu" and again for libfn.


  • C++0x: How can I access variadic tuple members by index at runtime?

    - by nonoitall
    I have written the following basic Tuple template: template <typename... T> class Tuple; template <uintptr_t N, typename... T> struct TupleIndexer; template <typename Head, typename... Tail> class Tuple<Head, Tail...> : public Tuple<Tail...> { private: Head element; public: template <uintptr_t N> typename TupleIndexer<N, Head, Tail...>::Type& Get() { return TupleIndexer<N, Head, Tail...>::Get(*this); } uintptr_t GetCount() const { return sizeof...(Tail) + 1; } private: friend struct TupleIndexer<0, Head, Tail...>; }; template <> class Tuple<> { public: uintptr_t GetCount() const { return 0; } }; template <typename Head, typename... Tail> struct TupleIndexer<0, Head, Tail...> { typedef Head& Type; static Type Get(Tuple<Head, Tail...>& tuple) { return tuple.element; } }; template <uintptr_t N, typename Head, typename... Tail> struct TupleIndexer<N, Head, Tail...> { typedef typename TupleIndexer<N - 1, Tail...>::Type Type; static Type Get(Tuple<Head, Tail...>& tuple) { return TupleIndexer<N - 1, Tail...>::Get(*(Tuple<Tail...>*) &tuple); } }; It works just fine, and I can access elements in array-like fashion by using tuple.Get<Index>() - but I can only do that if I know the index at compile-time. However, I need to access elements in the tuple by index at runtime, and I won't know at compile-time which index needs to be accessed. Example: int chosenIndex = getUserInput(); cout << "The option you chose was: " << tuple.Get(chosenIndex) << endl; What's the best way to do this?


  • jQuery self-built plugin question - default value is overwritten.

    - by Bruno
    Hi jQuery ninjas. I need your help. I have made a really clean and simple example to illustrate my problem. I have built my own jQuery plugin: (function($) { $.fn.setColorTest = function(options) { options = $.extend($.fn.setColorTest.defaults,options); return this.each(function() { $(this).css({ 'color': options.color}); }); } $.fn.setColorTest.defaults = { color: '#000' }; })(jQuery); As you can see I'm setting a default color and making it possible for the user to change it. My problem/question is: I have two paragraphs on the same page where I want to use the default color for the first and a different color for the second paragraph: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/> <title>Color Test</title> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript"> $(document).ready(function() { $('a#click').click(function() { $('#test1').setColorTest(); }); $('a#click2').click(function() { $('#test2').setColorTest({color: '#fff666'}); }); }); </script> </head> <body> <a id="click">click here</a> <a id="click2">click here2</a> <p id="test1">Test 1</p> <p id="test2">Test 2</p> </body> </html> My problem is that if I click the second link (which overrides the default color for the second p) and afterwards click the first link, the first p will use the overwritten color and not the default color. How can I ensure that the first p will always use the default color? I know I can just define the color for the first p as well, but that is not an option here: $('#test1').setColorTest({color: '#000'}); So what to do?


  • What are the Options for Storing Hierarchical Data in a Relational Database?

    - by orangepips
    Good Overviews One more Nested Intervals vs. Adjacency List comparison: the best comparison of Adjacency List, Materialized Path, Nested Set and Nested Interval I've found. Models for hierarchical data: slides with good explanations of tradeoffs and example usage Representing hierarchies in MySQL: very good overview of Nested Set in particular Hierarchical data in RDBMSs: most comprehensive and well organized set of links I've seen, but not much in the way on explanation Options Ones I am aware of and general features: Adjacency List: Columns: ID, ParentID Easy to implement. Cheap node moves, inserts, and deletes. Expensive to find level (can store as a computed column), ancestry & descendants (Bridge Hierarchy combined with level column can solve), path (Lineage Column can solve). Use Common Table Expressions in those databases that support them to traverse. Nested Set (a.k.a Modified Preorder Tree Traversal) First described by Joe Celko - covered in depth in his book Trees and Hierarchies in SQL for Smarties Columns: Left, Right Cheap level, ancestry, descendants Compared to Adjacency List, moves, inserts, deletes more expensive. Requires a specific sort order (e.g. created). So sorting all descendants in a different order requires additional work. Nested Intervals Combination of Nested Sets and Materialized Path where left/right columns are floating point decimals instead of integers and encode the path information. Bridge Table (a.k.a. Closure Table: some good ideas about how to use triggers for maintaining this approach) Columns: ancestor, descendant Stands apart from table it describes. Can include some nodes in more than one hierarchy. Cheap ancestry and descendants (albeit not in what order) For complete knowledge of a hierarchy needs to be combined with another option. Flat Table A modification of the Adjacency List that adds a Level and Rank (e.g. ordering) column to each record. Expensive move and delete Cheap ancestry and descendants Good Use: threaded discussion - forums / blog comments Lineage Column (a.k.a. Materialized Path, Path Enumeration) Column: lineage (e.g. /parent/child/grandchild/etc...) Limit to how deep the hierarchy can be. Descendants cheap (e.g. LEFT(lineage, #) = '/enumerated/path') Ancestry tricky (database specific queries) Database Specific Notes MySQL Use session variables for Adjacency List Oracle Use CONNECT BY to traverse Adjacency Lists PostgreSQL ltree datatype for Materialized Path SQL Server General summary 2008 offers HierarchyId data type appears to help with Lineage Column approach and expand the depth that can be represented.
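    To make the simplest option above concrete, here is a small sketch of an Adjacency List traversed with a recursive common table expression, as mentioned in its feature list (SQLite via Python purely because it needs no setup; the table and the category names are invented):

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE node (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
            INSERT INTO node VALUES (1, NULL, 'Electronics'),
                                    (2, 1,    'Laptops'),
                                    (3, 1,    'Phones'),
                                    (4, 3,    'Smartphones');
        """)

        # All descendants of 'Electronics', with their depth, in one query
        # (requires a SQLite build with WITH RECURSIVE support).
        rows = db.execute("""
            WITH RECURSIVE subtree(id, name, depth) AS (
                SELECT id, name, 0 FROM node WHERE id = 1
                UNION ALL
                SELECT n.id, n.name, s.depth + 1
                FROM node n JOIN subtree s ON n.parent_id = s.id
            )
            SELECT name, depth FROM subtree ORDER BY depth, name
        """).fetchall()
        print(rows)  # [('Electronics', 0), ('Laptops', 1), ('Phones', 1), ('Smartphones', 2)]

    The same descendants query against a Bridge/Closure Table becomes a plain join on (ancestor, descendant), which is why that option pairs well with an Adjacency List when ancestry and subtree queries dominate.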


  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any versions of .NET and so the installer needs to detect and remedy this if necessary. The way I've been testing this in Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X, it's only allowed from Mac OS X Server, which seeing as how I'm targeting an end product is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server and actively nukes any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"


  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible. In the C++ world, I would normally use something like a std::pair<string,string> to act as a key value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have a consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank. So one option is to use List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key value pairs, with a foreign key that maps the key value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database. My specific questions regarding this are: Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice? Is there a more .NETish way to implement the key value pairs, other than List<List<KeyValuePair<String,String>>>? Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing? Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low level information. Perhaps there should be a DB with a fixed schema only for the low level stuff, and then another DB that's more flexible for reporting information back to users.
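    For the third idea (a separate key/value table joined back to the log entry), here is a minimal sketch of the schema and write path - SQLite and every column name here are placeholders, and in the real library the background thread described above would batch these inserts:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.executescript("""
            CREATE TABLE log_entry (
                id        INTEGER PRIMARY KEY,
                level     TEXT NOT NULL,               -- INFO / WARNING / ERROR
                logged_at TEXT DEFAULT CURRENT_TIMESTAMP
            );
            CREATE TABLE log_field (
                entry_id  INTEGER REFERENCES log_entry(id),
                key       TEXT NOT NULL,               -- e.g. USER, FILENAME
                value     TEXT
            );
        """)

        def write_entry(level, fields):
            """fields: the row's key/value pairs, whatever the plugin chose to report."""
            cursor = db.execute("INSERT INTO log_entry (level) VALUES (?)", (level,))
            entry_id = cursor.lastrowid
            db.executemany(
                "INSERT INTO log_field (entry_id, key, value) VALUES (?, ?, ?)",
                [(entry_id, k, v) for k, v in fields])

        write_entry("INFO", [("USER", "john")])
        write_entry("INFO", [("FILENAME", "report.txt"), ("USER", "bill")])

        for row in db.execute("""SELECT e.id, e.level, f.key, f.value
                                 FROM log_entry e JOIN log_field f ON f.entry_id = e.id"""):
            print(row)

    No ALTER TABLE is ever needed when a plugin invents a new column name; the trade-off is that the viewer has to pivot rows into columns (missing keys simply come back blank) instead of issuing a plain SELECT per column.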


  • Save HashMap data into SQLite

    - by Matthew
    I'm trying to save data from JSON into SQLite. For now I keep the data from the JSON in a HashMap. I have already searched for this, and the advice is to use ContentValues, but I still don't get how to use it. I tried looking at this question, save data to SQLite from json object using Hashmap in Android, but it doesn't help a lot. Is there any option that I can use to save the data from the HashMap into SQLite? Here's my code. MainHellobali.java // Hashmap for ListView ArrayList<HashMap<String, String>> all_itemList; @Override protected void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main_helloballi); all_itemList = new ArrayList<HashMap<String, String>>(); // Calling async task to get json new getAllItem().execute(); } private class getAllItem extends AsyncTask<Void, Void, Void> { @Override protected Void doInBackground(Void... arg0) { // Creating service handler class instance ServiceHandler sh = new ServiceHandler(); // Making a request to url and getting response String jsonStr = sh.makeServiceCall(url, ServiceHandler.GET); Log.d("Response: ", "> " + jsonStr); if (jsonStr != null) { try { all_item = new JSONArray(jsonStr); // looping through All Contacts for (int i = 0; i < all_item.length(); i++) { JSONObject c = all_item.getJSONObject(i); String item_id = c.getString(TAG_ITEM_ID); String category_name = c.getString(TAG_CATEGORY_NAME); String item_name = c.getString(TAG_ITEM_NAME); // tmp hashmap for single contact HashMap<String, String> allItem = new HashMap<String, String>(); // adding each child node to HashMap key => value allItem.put(TAG_ITEM_ID, item_id); allItem.put(TAG_CATEGORY_NAME, category_name); allItem.put(TAG_ITEM_NAME, item_name); // adding contact to contact list all_itemList.add(allItem); } } catch (JSONException e) { e.printStackTrace(); } } else { Log.e("ServiceHandler", "Couldn't get any data from the url"); } return null; } } I have DatabaseHandler.java and AllItem.java too. I can put them in here if necessary. Thanks in advance. ** Edit: Added Code ** // looping through All Contacts for (int i = 0; i < all_item.length(); i++) { JSONObject c = all_item.getJSONObject(i); String item_id = c.getString(TAG_ITEM_ID); String category_name = c.getString(TAG_CATEGORY_NAME); String item_name = c.getString(TAG_ITEM_NAME); DatabaseHandler databaseHandler = new DatabaseHandler(this); //error here "The Constructor DatabaseHandler(MainHellobali.getAllItem) is undefined }


  • Simultaneous connections with PHP and SOAP?

    - by Dov
    I'm new to using SOAP and only understand the utmost basics of it. I create a client resource/connection, I then run some queries in a loop and I'm done. The issue I am having is that when I increase the iterations of the loop, i.e. from 100 to 1000, it seems to run out of memory and throws an internal server error. How could I run either a) multiple simultaneous connections or b) create a connection, 100 iterations, close connection, create connection.. etc.? "a)" looks to be the better option but I have no clue as to how to get it up and running whilst keeping memory (I assume opening and closing connections) at a minimum. Thanks in advance! index.php <?php // set loops to 0 $loops = 0; // connection credentials and settings $location = 'https://theconsole.com/'; $wsdl = $location.'?wsdl'; $username = 'user'; $password = 'pass'; // include the console and client classes include "class_console.php"; include "class_client.php"; // create a client resource / connection $client = new Client($location, $wsdl, $username, $password); while ($loops <= 100) { $dostuff; } ?> class_console.php <?php class Console { // the connection resource private $connection = NULL; /** * When this object is instantiated a connection will be made to the console */ public function __construct($location, $wsdl, $username, $password, $proxyHost = NULL, $proxyPort = NULL) { if(is_null($proxyHost) || is_null($proxyPort)) $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password)); else $connection = new SoapClient($wsdl, array('login' => $username, 'password' => $password, 'proxy_host' => $proxyHost, 'proxy_port' => $proxyPort)); $connection->__setLocation($location); $this->connection = $connection; return $this->connection; } /** * Will print any type of data to screen, where supported by print_r * * @param $var - The data to print to screen * @return $this->connection - The connection resource **/ public function screen($var) { print '<pre>'; print_r($var); print '</pre>'; return $this->connection; } /** * Returns a server / connection resource * * @return $this->connection - The connection resource */ public function srv() { return $this->connection; } } ?>


  • Problem databinding a dictionary in a ListView combo-box column.

    - by Ashish Ashu
    I have a ListView whose ItemsSource is set to my custom collection, let's say MyCollection. The code below is not the full code, it's just code snippets to explain the problem. class Item : INotifyPropertyChanged { Options _options; public Options OptionProp { get { return _options; } set { _options = value; OnPropertyChanged ("OptionProp");} } string _Name; public string NameProp { get { return _Name; } set { _Name = value; OnPropertyChanged ("NameProp");} } } class Options : Dictionary<string,string> { public Options() { this.Clear(); this.Add("One" , "1" ); this.Add("Two" , "2" ); this.Add("Three" , "3" ); } } MyCollection in my view model: class viewModel { ObservableCollection<Item> **MyCollection**; KeyValuePair<string,string> **SelectedOption**; } The ListView ItemsSource is set to my MyCollection. <ListView ItemsSource = MyCollection> The ListView contains two columns, for which I have defined data templates. The first column is a combo-box whose ItemsSource is set to Options (defined above). The second column is a simple TextBlock to display the Name. Problem 1. I have defined a data template for the first column in which I have a combo box; I have set the ItemsSource = **MyCollection** and SelectedItem = **SelectedOption** of the combo-box. The user can perform the following operations in the ListView: Add (add a row to the ListView), Move Up (move a row up in the ListView), Move Down (move a row down in the ListView). Now when I add a row to the ListView, the combo-box selected index always comes up as -1 (first column). However, the combo box contains the options One, Two and Three. Also, I have initialized SelectedOption to contain the first item, i.e. One. Problem 2. Let's suppose I have added a single row to the ListView and I have selected option "One" in the combo box manually. Now when I perform the Move Up or Move Down operations, the selected index of the combo box is again set to -1. In the Move Up and Move Down operations, I am calling the MoveUp and MoveDown methods of the ObservableCollection. Problem 3. How do I serialize the entire collection to XML, since I can't serialize the Dictionary and KeyValuePair? I have to restore the state of the ListView.


  • HTML: Include, or exclude, optional closing tags?

    - by Ian Boyd
    Some HTML1 closing tags are optional, i.e.: </HTML> </HEAD> </BODY> </P> </DT> </DD> </LI> </OPTION> </THEAD> </TH> </TBODY> </TR> </TD> </TFOOT> </COLGROUP> Note: Not to be confused with closing tags that are forbidden to be included, i.e.: </IMG> </INPUT> </BR> </HR> </FRAME> </AREA> </BASE> </BASEFONT> </COL> </ISINDEX> </LINK> </META> </PARAM> Note: xhtml is different from HTML. xhtml is a form of xml, which requires every element have a closing tag. A closing tag can be forbidden in html, yet mandatory in xhtml. Are the optional closing tags ideally included, but we'll accept them if you forgot them, or ideally not included, but we'll accept them if you put them in In other words, should i include them, or should i not include them? The HTML 4.01 spec talks about closing element tags being optional, but doesn't say if it's preferable to include them, or preferable to not include them. On the other hand, a random article on DevGuru says: The ending tag is optional. However, it is recommended that it be included. The reason i ask is because you just know it's optional for compatibility reasons; and they would have made them (mandatory | forbidden) if they could have. Put it another way: What did HTML 1, 2, 3 do with regards to these, now optional, closing tags. What does HTML 5 do? And what should i do? Footnotes 1HTML 4.01


  • Some clarification on rvalue references

    - by Dennis Zickefoose
    First: where are std::move and std::forward defined? I know what they do, but I can't find proof that any standard header is required to include them. In gcc44 sometimes std::move is available, and sometimes its not, so a definitive include directive would be useful. When implementing move semantics, the source is presumably left in an undefined state. Should this state necessarily be a valid state for the object? Obviously, you need to be able to call the object's destructor, and be able to assign to it by whatever means the class exposes. But should other operations be valid? I suppose what I'm asking is, if your class guarantees certain invariants, should you strive to enforce those invariants when the user has said they don't care about them anymore? Next: when you don't care about move semantics, are there any limitations that would cause a non-const reference to be preferred over an rvalue reference when dealing with function parameters? void function(T&); over void function(T&&); From a caller's perspective, being able to pass functions temporary values is occasionally useful, so it seems as though one should grant that option whenever it is feasible to do so. And rvalue references are themselves lvalues, so you can't inadvertently call a move-constructor instead of a copy-constructor, or something like that. I don't see a downside, but I'm sure there is one. Which brings me to my final question. You still can not bind temporaries to non-const references. But you can bind them to non-const rvalue references. And you can then pass along that reference as a non-const reference in another function. void function1(int& r) { r++; } void function2(int&& r) { function1(r); } int main() { function1(5); //bad function2(5); //good } Besides the fact that it doesn't do anything, is there anything wrong with that code? My gut says of course not, since changing rvalue references is kind of the whole point to their existence. And if the passed value is legitimately const, the compiler will catch it and yell at you. But by all appearances, this is a runaround of a mechanism that was presumably put in place for a reason, so I'd just like confirmation that I'm not doing anything foolish.

