Search Results

Search found 39456 results on 1579 pages for 'why do you'.


  • Is jQuery forcing Adobe ColdFusion to abandon the dead flash product line?

    - by crosenblum
    I have been reading a lot about how Flash development/design has died, and as jQuery takes over and HTML5 arrives in the near future, will this start to push Adobe/ColdFusion away from Flash and toward less product linking? I mean, I love ColdFusion and want it to continue to grow. However, if Adobe only bought ColdFusion from Macromedia so that they could bundle Flash and ColdFusion together, does the death of Flash mean the death of ColdFusion? http://topnews.us/content/221385-jobs-says-adobes-flash-waning-and-had-its-day http://aext.net/2010/03/javascript-jquery-killing-flash-tutorial-jquery-plugin/ I really don't mind if Flash dies; I do mind greatly if ColdFusion does. Is the success of Flash linked to ColdFusion? If so, why? Or why not? The purpose of this isn't to start some war about Flash pros and cons. I was only worried that Adobe would cause problems for ColdFusion if Flash ran into market/financial problems. That was my main concern... And no, I am not anti-Flash. But my financial sanity depends on ColdFusion being a success, so that is why I asked. I want everyone else's opinion on this situation. Thank you.

    Read the article

  • Cleaning up a sparsebundle with a script

    - by nickg
    I'm using Time Machine to back up some servers to a sparse disk image bundle, and I'd like a script to clean up the old backups and re-size the image once space has been freed up. I'm fairly certain the data is protected, because if I delete the old backups by right-clicking, I have to enter a password to be able to delete them. To allow my script to delete them, I've been running it as root. For some reason it just won't work, and for every file it tries to delete I get:

        rm: /file/: Operation not permitted

    Here is my script:

        #!/bin/bash
        for server in servername; do
            /usr/bin/hdiutil attach -mountpoint /path/to/mountpoint /path/to/sparsebundle/$server.sparsebundle/;
            /bin/sleep 10;
            /usr/bin/find /path/to/mountpoint -type d -mtime +7 -exec /bin/rm -rf {} \;
            /usr/bin/hdiutil unmount /path/to/mountpoint;
            /bin/sleep 10;
            /usr/bin/hdiutil compact /path/to/sparsebundle/$server.sparsebundle/;
        done
        exit;

    One of the problems I thought was causing this was that it needed a mountpoint specified, since the default mount is to /Volumes/Time\ Machine\ Backups/; that's why I created a mountpoint. I also thought it was trying to delete the files too quickly after mounting, before the image was actually mounted, which is why I added the sleep. I've also tried using the -delete option for find instead of -exec, but it didn't make any difference. Any help on this would be greatly appreciated, because I'm out of ideas as to why this won't work.

    Read the article

  • I cannot use Session in Page_Load and I get the error below

    - by LostLord
    hi my dear friends.... why do I get this error:

        Object reference not set to an instance of an object.

    when I put this code in my Page_Load:

        protected void Page_Load(object sender, EventArgs e)
        {
            BackEndUtils.OverallLoader();
            string Teststr = Session["Co_ID"].ToString();
        }

    This session is created when the user logs in to my web site, and the session works in other areas. Thanks for your attention.

    Thanks for your answers. I removed BackEndUtils.OverallLoader(); but the error still exists. I tried

        Teststr = Convert.ToString(Session["Co_ID"]);

    and the error disappeared - but I don't know why that session is null here when it works perfectly in other areas, such as a button on that form. What is the matter? My web page markup is like this:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Admin/AdminBackend.Master" AutoEventWireup="true" CodeBehind="Personel.aspx.cs" Inherits="Darman.Admin.Personel" Theme="DefaultTheme" %>

    I put this code in a button like this:

        string Teststr = Convert.ToString(Session["Co_ID"]);

    When I press that button, this code in Page_Load (postback) + in Button_Click works perfectly and shows me 23 (my Co_ID). But when I run my page in the browser (first time), this code in Page_Load shows me null. Why? Thanks a lot

    Read the article

  • static block instance block java Order

    - by Rollerball
    Having read this question, "In what order are the different parts of a class initialized when a class is loaded in the JVM?", and the related JLS, I would like to know in more detail why, for example, with class Animal (superclass) and class Dog (subclass) as follows:

        class Animal {
            static {
                System.out.println("This is Animal's static block speaking");
            }
            {
                System.out.println("This is Animal's instance block speaking");
            }
        }

        class Dog extends Animal {
            static {
                System.out.println("This is Dog's static block speaking");
            }
            {
                System.out.println("This is Dog's instance block speaking");
            }

            public static void main(String[] args) {
                Dog dog = new Dog();
            }
        }

    OK, before instantiating a class its direct superclass needs to be initialized (therefore all the static variables and static blocks need to be executed). So basically the question is: why, after initializing the static variables and static blocks of the superclass, does control go down to the subclass for static variable initialization rather than finishing off the initialization of the instance members as well? The control goes like this:

    superclass (Animal): static variables and static blocks
    subclass (Dog): static variables and static blocks
    superclass (Animal): instance variables and instance blocks
    subclass (Dog): instance variables and instance blocks

    What is the reason it works this way rather than:

    superclass -> static members
    superclass -> instance members
    subclass -> static members
    subclass -> instance members

    Read the article

  • Encapsulate update method inside of object or have method which accepts an object to update

    - by Tom
    Hi, I actually have two questions related to each other:

    I have an object (class) called, say, MyClass, which holds data from my database. Currently I have a list of these objects (List<MyClass>) that resides in a singleton in a "communal area". I feel it's easier to manage the data this way, and I fail to see how passing a class around from object to object is beneficial over a singleton (I would be happy if someone could tell me why). Anyway, the data may change in the database from outside my program, so I have to update the data every so often. To update the list of MyClass I have a method called, say, Update, written in another class, which accepts a list of MyClass and updates all the instances of MyClass in the list. However, would it be better instead to encapsulate the Update() method inside the MyClass object, so I would write:

        foreach (MyClass obj in MyClassList)
        {
            obj.Update();
        }

    Which is the better implementation, and why?

    The update method requires an XML reader. I have written an XML reader class, which is basically a wrapper over the standard XML reader the language natively provides and performs application-specific data collection. Should the XML reader class be in any way in the "inheritance path" of the MyClass object - should MyClass inherit from the XML reader just because it uses a few of its methods? I can't see why it should. I don't like the idea of declaring an instance of the XML reader class inside MyClass either; a MyClass object is meant to be a simple "record" from the database, and I feel giving it loads of methods and other object instances is a bit messy. Perhaps my XML reader class should be static, but C#'s native XmlReader isn't static.

    Any comments would be greatly appreciated. Thanks, Thomas

    Read the article

  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table:

        @Entity
        public class SiteConfiguration extends ConfigurationSet {
            @ManyToMany
            @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId"))
            @JoinTable(
                name="SiteConfig_InstConfig",
                joinColumns = @JoinColumn(name="SiteConfigId"),
                inverseJoinColumns = @JoinColumn(name="InstallationConfigId")
            )
            Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
                new HashMap<SiteTypeInstallation, InstallationConfiguration>();
            ...
        }

    The underlying table (in Oracle 11g) is:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITECONFIGID                   NOT NULL NUMBER(19)
        SITETYPEINSTALLATIONID         NOT NULL NUMBER(19)
        INSTALLATIONCONFIGID           NOT NULL NUMBER(19)

    The key entity used to have a three-column primary key in the database, but is now redefined as:

        @Entity
        public class SiteTypeInstallation implements IdResolvable {
            @Id
            @GeneratedValue(generator="SiteTypeInstallationSeq", strategy= GenerationType.SEQUENCE)
            @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1)
            long id;

            @ManyToOne
            @JoinColumn(name="SiteTypeId")
            SiteType siteType;

            @ManyToOne
            @JoinColumn(name="InstalationRoleId")
            InstallationRole role;

            @ManyToOne
            @JoinColumn(name="InstallationTypeId")
            InstType type;
            ...
        }

    The table for this has a primary key 'Id' and foreign key constraints+indexes for each of the other columns:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITETYPEID                     NOT NULL NUMBER(19)
        INSTALLATIONROLEID             NOT NULL NUMBER(19)
        INSTALLATIONTYPEID             NOT NULL NUMBER(19)
        ID                             NOT NULL NUMBER(19)

    For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error:

        org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId])

    If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id is clearly defining a simple key, and doubly confused why it picks exactly those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments? Thanks in advance, -Lars

    Read the article

  • Old dll.config problem!

    - by user313421
    Since 2005, as far as I can google, this has been a problem for anyone who needs to read the configuration of an assembly from its config file "*.dll.config", and Microsoft hasn't done anything about it yet. Story: if you try to read a setting from a class library (plug-in), you fail. Instead, the config of the main application domain (the EXE which is using the plug-in) is read, and because there's probably no such setting there, your plug-in will use the default value that was hard-coded when you created its settings for the first time. Any change to the .dll.config won't be seen by your plug-in, and you wonder why the file is there at all! If you want to replace the mechanism and start searching, you may find something like this: http://stackoverflow.com/questions/594298/c-dll-config-file But that's just some ideas and one line of code. A good replacement for the built-in config shouldn't read from the file system each time we need a config value, so we can store the values in memory; then what if the user changes the config file? We need a FileSystemWatcher, and we need some design like a singleton... and finally we end up rebuilding exactly what .NET's configuration already does, except that ours works. It seems MS did everything but forgot why they built the ".dll.config" in the first place. Since no DLL executes by itself - they are referenced from other apps (even if used on the web) - why is there such a "*.dll.config" file at all? I'm not going to argue whether it's good to have multiple config files or not. It's my design (plug-able components). Finally { After all these years, is there any good practice, such as a custom settings class to add to each assembly that reads from its own config file? }

    Read the article

  • Is it impossible to secure .NET code (intellectual property)?

    - by JL
    I used to work in JavaScript a lot, and one thing that really bothered my employers was that the source code was too easy to steal. Even with obfuscation, nothing really helped, because we all knew that any competent developer would be able to read that code if they wanted to. JS scripts are one thing, but what about SOA projects that have millions invested in IP (intellectual property)? I love .NET, and especially C#, but I recently again had to answer the question "If we give this compiled program to our clients, can their developers reverse engineer it?" I had gone out of my way to obfuscate the code, but I knew it wouldn't take much for another determined C# developer to get at it anyway. So I earnestly pose the question: is it impossible to secure .NET code? The considerations I have are as follows:

    Even regular native executables can be reversed, but not every developer has the skill to do this; it's a lot harder to disassemble a native executable than a .NET assembly.
    Obfuscation will only get you so far, but it does help a little.
    Why have I never seen any public acknowledgement by Microsoft that anything written in .NET is subject to relatively easy IP theft?
    Why have I never seen a scrap of countermeasure training on any Microsoft site?
    Why does VS come with a community obfuscator as an optional component?

    OK, maybe I have just had my head in the sand here, but it's not exactly high on most developers' priority lists. Are there any plans to address my concerns in any future version of .NET? I'm not knocking .NET, but I would like some realistic answers. Thank you - question marked as subjective and community!

    Read the article

  • atol(), atof(), atoi() function behaviours - is there a stable way to convert from/to string/integer?

    - by Berkay
    These days I'm playing with the C functions atol(), atof() and atoi(); from a blog post I found a tutorial and applied it. Here are my results:

        void main()
        {
            char a[10], b[10];
            puts("Enter the value of a");
            gets(a);
            puts("Enter the value of b");
            gets(b);
            printf("%s+%s=%ld and %s-%s=%ld", a, b, (atol(a)+atol(b)), a, b, (atol(a)-atol(b)));
            getch();
        }

    There is atof(), which returns the float value of the string, and atoi(), which returns the integer value. Now, to see the difference between the three, I checked this code:

        main()
        {
            char a[] = {"2545.965"};
            printf("atol=%ld\t atof=%f\t atoi=%d\t\n", atol(a), atof(a), atoi(a));
        }

    The output will be:

        atol=2545 atof=2545.965000 atoi=2545

    Now take char a[] = {"heyyou"}; when you run the program, the following will be the output (why? is there any solution to convert pure strings to an integer?):

        atol=0 atof=0 atoi=0

    The string should contain a numeric value. Now modify the program to char a[] = {"007hey"}; the output in this case (tested on Red Hat) will be:

        atol=7 atof=7.000000 atoi=7

    So the functions have taken only the 007, not the remaining part (why?). Now consider char a[] = {"hey007"}; the output of the program will be:

        atol=0 atof=0.000000 atoi=0

    So I just want to convert my strings to numbers and then back to the same text. I played with these functions and, as you can see, I'm getting really interesting results - why is that? Are there any other functions to convert from string to integer and vice versa?
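    For reference, the usual way to get both the converted value and an indication of where parsing stopped is the strtol()/strtod() family rather than atoi()/atol()/atof(). The sketch below is illustrative only (not part of the original question, variable names made up); it shows how the "007hey" and "heyyou" cases can be told apart, and one way to go from number back to string:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            const char *a = "007hey";
            char *end = NULL;

            long value = strtol(a, &end, 10);   /* parses the leading "007"; end points at "hey" */
            printf("value=%ld, parsing stopped at \"%s\"\n", value, end);

            if (end == a)
                printf("no leading digits at all (the \"heyyou\" / \"hey007\" case)\n");

            char buf[32];
            snprintf(buf, sizeof buf, "%ld", value);   /* number back to string */
            printf("back to text: %s\n", buf);
            return 0;
        }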

    Read the article

  • The new operator in C# isn't overriding base class member

    - by Dominic Zukiewicz
    I am confused as to why the new operator isn't working as I expected it to. Note: all classes below are defined in the same namespace and in the same file.

    This class allows you to prefix any content written to the console with some provided text:

        public class ConsoleWriter
        {
            private string prefix;

            public ConsoleWriter(string prefix)
            {
                this.prefix = prefix;
            }

            public void Write(string text)
            {
                Console.WriteLine(String.Concat(prefix, text));
            }
        }

    Here is a base class:

        public class BaseClass
        {
            protected static ConsoleWriter consoleWriter = new ConsoleWriter("");

            public static void Write(string text)
            {
                consoleWriter.Write(text);
            }
        }

    Here is an implemented class:

        public class NewClass : BaseClass
        {
            protected new static ConsoleWriter consoleWriter = new ConsoleWriter("> ");
        }

    Now here's the code to execute this:

        class Program
        {
            static void Main(string[] args)
            {
                BaseClass.Write("Hello World!");
                NewClass.Write("Hello World!");
                Console.Read();
            }
        }

    So I would expect the output to be:

        Hello World!
        > Hello World!

    But the output is:

        Hello World!
        Hello World!

    I do not understand why this is happening. Here is my thought process as to what is happening:

    1. The CLR calls the BaseClass.Write() method.
    2. The CLR initialises the BaseClass.consoleWriter member.
    3. The method is called and executed with the BaseClass.consoleWriter variable.

    Then:

    1. The CLR calls NewClass.Write().
    2. The CLR initialises the NewClass.consoleWriter object.
    3. The CLR sees that the implementation lies in BaseClass, but the method is inherited through.
    4. The CLR executes the method locally (in NewClass) using the NewClass.consoleWriter variable.

    I thought this is how the inheritance structure works. Please can someone help me understand why this is not working?

    Read the article

  • How to store child objects on GAE using JDO from Scala

    - by Gero
    Hi, I have a parent-child relation between two classes, but the child objects are never stored. I do get a warning:

        org.datanucleus.store.appengine.MetaDataValidator checkForIllegalChildField: Unable to validate relation net.vermaas.kivanotify.model.UserCriteria.internalCriteria

    but it is unclear to me why this occurs. I have already tried several alternatives without luck. The parent class is "UserCriteria", which has a List of "Criteria" as children. The classes are defined as follows (Scala):

        class UserCriteria(tu: String, crit: Map[String, String]) extends LogHelper {
          @PrimaryKey
          @Persistent{val valueStrategy = IdGeneratorStrategy.IDENTITY}
          var id = KeyFactory.createKey("UserCriteria", System.nanoTime)

          @Persistent
          var twitterUser = tu

          @Persistent
          var internalCriteria: java.util.List[Criteria] = flatten(crit)

          def flatten(crits: Map[String, String]): java.util.List[Criteria] = {
            val list = new java.util.ArrayList[Criteria]
            for (key <- crits.keySet) {
              list.add(new Criteria(this, key, crits(key)))
            }
            list
          }

          def criteria: Map[String, String] = {
            val crits = mutable.Map.empty[String, String]
            for (i <- 0 to internalCriteria.size-1) {
              crits(internalCriteria.get(i).name) = internalCriteria.get(i).value
            }
            Map.empty ++ crits
          }

          // Stripped the equals, canEquals, hashCode, toString code to keep the code snippet short...
        }

        @PersistenceCapable
        @EmbeddedOnly
        class Criteria(uc: UserCriteria, nm: String, vl: String) {
          @Persistent
          var userCriteria = uc

          @Persistent
          var name = nm

          @Persistent
          var value = vl

          override def toString = {
            "Criteria name: " + name + " value: " + value
          }
        }

    Any ideas why the children are not stored? Or why I get the error message? Thanks, Gero

    Read the article

  • process semaphores linux - wait

    - by coubeatczech
    Hi, I'm trying to write a simple program that starts and waits on a System V semaphore until it gets terminated by a signal.

        union semun {
            int val;
            struct semid_ds *buf;
            unsigned short int *array;
            struct seminfo *__buf;
        };

        int main(){
            int semaphores = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);

            union semun arg;
            arg.val = 0;
            semctl(semaphores, 0, SETVAL, arg);

            struct sembuf operations[1];
            operations[0].sem_num = 0;
            operations[0].sem_op = -1;
            operations[0].sem_flg = 0;
            semop(semaphores, operations, 1);

            fprintf(stderr, "Why?\n");
            return 0;
        }

    I expect that every time this program is executed, nothing actually happens and it waits on the semaphore, but every time it goes straight through the semaphore and writes "Why?". Why?
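    Since all three System V calls report failure through their return value and errno, a first diagnostic step is to check each of them. The sketch below is an assumption about how one might start debugging this (it is not code from the question); it would show, for example, whether semop() is failing outright rather than returning after a successful decrement:

        #include <cstdio>
        #include <sys/types.h>
        #include <sys/ipc.h>
        #include <sys/sem.h>

        union semun { int val; struct semid_ds *buf; unsigned short *array; };

        int main() {
            int sem = semget(IPC_PRIVATE, 1, IPC_CREAT | 0666);
            if (sem == -1) { perror("semget"); return 1; }

            union semun arg;
            arg.val = 0;
            if (semctl(sem, 0, SETVAL, arg) == -1) { perror("semctl SETVAL"); return 1; }

            struct sembuf op;
            op.sem_num = 0;
            op.sem_op = -1;   // should block while the semaphore value is 0
            op.sem_flg = 0;
            if (semop(sem, &op, 1) == -1) {
                perror("semop");   // EINTR, EINVAL, EIDRM... would explain falling straight through
                return 1;
            }
            return 0;
        }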

    Read the article

  • HTML converted to jQuery collection not searchable with selectors?

    - by jimp
    I am trying to dynamically load a page using $.get(), parse the return with var $content = $(data), and ultimately use selectors to find only certain parts of the document. But I cannot figure out why the jQuery collection returned from $(data) does not find some very basic selectors. I set up a jsFiddle to illustrate the problem using a very small string of HTML:

        <html>
        <head>
            <title>See Our Events</title>
        </head>
        <body><div id="content">testing</div></body>
        </html>

    I want to find the <title> node:

        var html = "<html>\n"+
                   "<head>\n"+
                   " <title>See Our Events</title>\n"+
                   "</head>\n"+
                   "<body><div id=\"content\">testing</div></body>\n"+
                   "</html>";
        var $content = $(html);
        console.log($content.find('title').length); // Logs 0. Why?

    If I wrap a <div> around the HTML, then the selector works. (But if you look at the jsFiddle, other variations of the selector still do not work!)

        var html = "<div><html>\n"+
                   "<head>\n"+
                   " <title>See Our Events</title>\n"+
                   "</head>\n"+
                   "<body><div id=\"content\">testing</div></body>\n"+
                   "</html></div>";
        var $content = $(html);
        console.log($content.find('title').length); // Logs 1.

    Please look at the jsFiddle, too. It contains more examples than my code here, to keep the post easier to read. Why does my otherwise very basic selector not return the title node?

    Read the article

  • Can I specify a default value?

    - by atch
    Why is it that for user-defined types, when creating an array of objects, every element of the array is initialized with the default ctor, but when I create an array of a built-in type this isn't the case? Second question: is it possible to specify a default value to be used while initializing? Something like this (not valid):

        char* p = new char[size]('\0');

    And another question on the same topic while I'm on arrays. Given that when creating an array of a user-defined type every element will be initialized with the default ctor - firstly, why? If arrays of built-in types do not initialize their elements with their defaults, why do arrays of UDTs? And secondly: is there a way to switch it off / avoid / circumvent it somehow? It seems like a bit of a waste if, for example, I create an array of size 10000: the default ctor will be invoked 10000 times and I will (later on) overwrite those values anyway. I think the behaviour should be consistent - either every type of array should be initialized, or none - and I think the behaviour for built-in arrays is the more appropriate one.
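    As a point of comparison, value-initialization gives a zero/'\0' fill for both built-in and class-type arrays. The snippet below is a small illustrative sketch (names are made up), not part of the original question:

        #include <cstddef>
        #include <vector>

        int main() {
            const std::size_t size = 10000;

            char* p = new char[size]();        // value-initialized: every element is '\0'
            int*  q = new int[size]();         // works for built-in types too: all zeros

            std::vector<char> buf(size, 'x');  // a container lets you pick the fill value
                                               // and releases the memory for you

            delete[] p;
            delete[] q;
        }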

    Read the article

  • Few Basic Questions in Overriding

    - by Dahlia
    I have a few problems with my basics and would be thankful if someone could clear them up.

    What does it mean when I say base *b = new derived;? Why would one go for this? We can very well create objects of class base and class derived separately and then call the functions accordingly. I know that base *b = new derived; is called Object Slicing, but why and when would one go for this? I know why it is not advisable to convert a base class object to a derived class object (because the base class is not aware of the derived class's members and methods). I have even read in other StackOverflow threads that if this is going to be the case then we have to change/re-visit our design. I understand all that; however, I am just curious - is there any way to do this?

        class base
        {
        public:
            void f() { cout << "In Base"; }
        };

        class derived : public base
        {
        public:
            void f() { cout << "In Derived"; }
        };

        int _tmain(int argc, _TCHAR* argv[])
        {
            base b1, b2;
            derived d1, d2;

            b2 = d1;
            d2 = reinterpret_cast<derived*>(b1); // gives error C2440

            b1.f();          // Prints In Base
            d1.f();          // Prints In Derived
            b2.f();          // Prints In Base
            d1.base::f();    // Prints In Base
            d2.f();

            getch();
            return 0;
        }

    In the case of my example above, is there any way I could call the base class f() using a derived class object? I used d1.base::f(); I just want to know if there is any way to do it without using the scope resolution operator. Thanks a lot for your time in helping me out!
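    One way to get the base-class behaviour without writing the scope-resolution operator at the call site is to make the call through a base pointer or reference: because f() is not virtual, the static type decides which version runs. Below is a minimal sketch using stand-ins for the classes above, not the asker's exact code:

        #include <iostream>

        struct base           { void f() { std::cout << "In Base\n"; } };
        struct derived : base { void f() { std::cout << "In Derived\n"; } };

        int main() {
            derived d;
            d.f();                        // "In Derived"
            static_cast<base&>(d).f();    // "In Base": the static type picks the non-virtual f()
            base* pb = &d;
            pb->f();                      // "In Base" for the same reason
        }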

    Read the article

  • Project management and bundling dependencies

    - by Joshua
    I've been looking for ways to learn about the right way to manage a software project, and I've stumbled upon the following blog post. I've learned some of the things mentioned the hard way, others make sense, and yet others are still unclear to me. To sum up, the author lists a bunch of features of a project and how much those features contribute to a project's 'suckiness', for lack of a better term. You can find the full article here: http://spot.livejournal.com/308370.html In particular, I don't understand the author's stance on bundling dependencies with your project. These are:

    == Bundling ==

    "Your source only comes with other code projects that it depends on [ +20 points of FAIL ]"
    Why is this a problem (especially given the last point)?

    "If your source code cannot be built without first building the bundled code bits [ +10 points of FAIL ]"
    Doesn't this necessarily have to be the case for software built against 3rd-party libs? Your code needs that other code to be compiled into its library before the linker can work.

    "If you have modified those other bundled code bits [ +40 points of FAIL ]"
    If this is necessary for your project, then it naturally follows that you've bundled said code with yours. If you want to customize a build of some lib, say WxWidgets, you'll have to edit that project's build scripts to build the library that you want. Subsequently, you'll have to publish those changes to people who wish to build your code, so why not use a high-level make script with the params already written in, and distribute that? Furthermore (especially in a Windows env), if your code base is dependent on a particular version of a lib (that you also need to custom-compile for your project), wouldn't it be easier to give the user the code yourself (because in this case it is unlikely that the user will already have the correct version installed)?

    So how would you respond to these comments, and what points may I be failing to take into consideration? Would you agree or disagree with the author's take (or mine), and why?

    Read the article

  • VS 2008 C++ build output?

    - by STingRaySC
    Why, when I watch the build output from a VC++ project in VS, do I see:

        1>Compiling...
        1>a.cpp
        1>b.cpp
        1>c.cpp
        1>d.cpp
        1>e.cpp
        [etc...]
        1>Generating code...
        1>x.cpp
        1>y.cpp
        [etc...]

    The output looks as though several compilation units are being handled before any code is generated. Is this really going on? I'm trying to improve build times, and by using precompiled headers I've gotten great speedups for each .cpp file, but there is a relatively long pause during the "Generating code..." message. I do not have "Whole Program Optimization" or "Link Time Code Generation" turned on. If this is the case, then why? Why doesn't VC++ compile each .cpp individually (which would include the code generation phase)? If this isn't just an illusion of the output, is there cross-compilation-unit optimization potentially going on here? There don't appear to be any compiler options to control that behavior (I know about WPO and LTCG, as mentioned above).

    EDIT: The build log just shows the .obj files in the output directory, one per line. There is no indication of "Compiling..." vs. "Generating code..." steps.

    EDIT: I have confirmed that this behavior has nothing to do with the "maximum number of parallel project builds" setting in Tools - Options - Projects and Solutions - Build and Run. Nor is it related to the MSBuild project build output verbosity setting. Indeed, if I cancel the build before the "Generating code..." step, the .obj files will not exist for the most recent set of "compiled" files. E.g., if I cancel the build during "c.cpp" above, I will see only "a.obj" and "b.obj".

    Read the article

  • Help me write my LISP :) LISP environments, Ruby Hashes...

    - by MikeC8
    I'm implementing a rudimentary version of LISP in Ruby just in order to familiarize myself with some concepts. I'm basing my implementation off of Peter Norvig's Lispy (http://norvig.com/lispy.html). There's something I'm missing here, though, and I'd appreciate some help... He subclasses Python's dict as follows:

        class Env(dict):
            "An environment: a dict of {'var':val} pairs, with an outer Env."
            def __init__(self, parms=(), args=(), outer=None):
                self.update(zip(parms,args))
                self.outer = outer
            def find(self, var):
                "Find the innermost Env where var appears."
                return self if var in self else self.outer.find(var)

    He then goes on to explain why he does this rather than just using a dict. However, for some reason, his explanation keeps passing in through my eyes and out through the back of my head. Why not use a dict, and then inside the eval function, when a new "sub-environment" needs to be created, just take the existing dict, update the key/value pairs that need to be updated, and pass that new dict into the next eval? Won't the Python interpreter keep track of the previous "outer" envs? And won't the nature of the recursion ensure that the values are pulled out from "inner" to "outer"? I'm using Ruby, and I tried to implement things this way. Something's not working, though, and it might be because of this, or perhaps not. Here's my eval function, env being a regular Hash:

        def eval(x, env = $global_env)
          ........
          elsif x[0] == "lambda" then
            ->(*args) { eval(x[2], env.merge(Hash[*x[1].zip(args).flatten(1)])) }
          ........
        end

    The line that matters, of course, is the "lambda" one. If there is a difference, what's importantly different between what I'm doing here and what Norvig did with his Env class? If there's no difference, then perhaps someone can enlighten me as to why Norvig uses the Env class. Thanks :)

    Read the article

  • Sending files using Winsock - optimal send() data length?

    - by Meta
    I am using Winsock with non-blocking sockets to send a file to a client. The way I'm doing it right now is that I read a chunk of 8192 bytes from the file and then loop until all of it successfully goes through send() (obviously handling WSAEWOULDBLOCK as it occurs). I then move on and read the next 8192 bytes, and so on... Although I can use any number other than 8192 when I test the transfer on my local machine, once I try it over a network it seems like 8191 is the largest number I can use. When I try any number higher than 8191 (starting with 8192), the file transfer becomes extremely slow (about 5 times slower). Is there any reason why 8191 is so special? I've done some more testing, and it turns out that using 8000 is slightly faster (by 0.5%). If you understand why 8191 is so special, can you tell me if there is a number better than the others (better than 8000)? I have a feeling that it has something to do with the fact that the default send buffer allocated to the socket by Winsock is 8KB, but I don't understand why. It might also have something to do with the Nagle algorithm, but again, I'm not sure how. Note that I have not modified the SO_SNDBUF option or the TCP_NODELAY option. Or am I doing this all wrong? What's the best way of sending a file over a non-blocking socket?
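    The loop described above ("send until the whole chunk is gone, handling WSAEWOULDBLOCK") usually ends up looking roughly like the sketch below. This is an illustrative reconstruction with made-up names (send_all), not the asker's code; it assumes an already-connected non-blocking SOCKET and linking against ws2_32:

        #include <winsock2.h>

        // Send exactly len bytes on a non-blocking socket, waiting with select()
        // whenever the send buffer is full instead of busy-looping.
        bool send_all(SOCKET s, const char* buf, int len)
        {
            int sent = 0;
            while (sent < len) {
                int n = send(s, buf + sent, len - sent, 0);
                if (n > 0) { sent += n; continue; }

                if (n == SOCKET_ERROR && WSAGetLastError() == WSAEWOULDBLOCK) {
                    fd_set wfds;
                    FD_ZERO(&wfds);
                    FD_SET(s, &wfds);
                    // block until the socket is writable again
                    if (select(0, NULL, &wfds, NULL, NULL) == SOCKET_ERROR)
                        return false;
                    continue;
                }
                return false;   // real error, or nothing could be sent
            }
            return true;
        }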

    Read the article

  • Dynamic/Generic ViewModelBase?

    - by Shimmy
    I am learning MVVM now, and I understand a few things (more than a few, but here are some of them):

    Does every model potentially exposed (through a VM) to the View have a VM? For example, if I have a Contact and an Address entity, and each contact has an Addresses (many) property, does that mean I have to create a ContactViewModel and an AddressViewModel, etc.?

    Do I have to redeclare all the properties of the Model again in the ViewModel (i.e. FirstName, LastName, and so on)? Why not have a ViewModelBase, where ContactViewModel is a subclass of ViewModelBase accessing the Entity's properties itself? And if it is a bad idea for the View to have access to the entity (please explain why), then why not have the ViewModelBase be a DynamicObject (see the Dictionary example at the linked page), so I don't have to redeclare all the properties and validation over and over in the two tiers (M & VM)? Because really, the View is accessing the ViewModel's fields via reflection anyway.

    I think MVVM is the hardest technology I've ever learned. It doesn't have out-of-the-box support, there are too many frameworks and methods to achieve it, and on the other hand there is no organized way to learn it (as there is for MVC, for instance); learning MVVM means browsing and surfing around trying to figure out what's better. Bottom line, what I mean by this section is: please go and vote for MSFT to add MVVM support in the BCL, plus generators for VMs and Vs according to the Ms. Thanks

    Read the article

  • Sending buffered images between Java client and Twisted Python socket server

    - by PattimusPrime
    I have a server-side function that draws an image with the Python Imaging Library. The Java client requests an image, which is returned via a socket and converted to a BufferedImage. I prefix the data with the size of the image to be sent, followed by a CR. I then read this number of bytes from the socket input stream and attempt to use ImageIO to convert it to a BufferedImage. In abbreviated code, for the client:

        public String writeAndReadSocket(String request) {
            // Write text to the socket
            BufferedWriter bufferedWriter = new BufferedWriter(new OutputStreamWriter(socket.getOutputStream()));
            bufferedWriter.write(request);
            bufferedWriter.flush();

            // Read text from the socket
            BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(socket.getInputStream()));

            // Read the prefixed size
            int size = Integer.parseInt(bufferedReader.readLine());

            // Get that many bytes from the stream
            char[] buf = new char[size];
            bufferedReader.read(buf, 0, size);
            return new String(buf);
        }

        public BufferedImage stringToBufferedImage(String imageBytes) {
            return ImageIO.read(new ByteArrayInputStream(s.getBytes()));
        }

    and the server:

        # Twisted server code here
        # The analog of the following method is called with the proper client
        # request and the result is written to the socket.
        def worker_thread():
            img = draw_function()
            buf = StringIO.StringIO()
            img.save(buf, format="PNG")
            img_string = buf.getvalue()
            return "%i\r%s" % (sys.getsizeof(img_string), img_string)

    This works for sending and receiving Strings, but image conversion (usually) fails. I'm trying to understand why the images are not being read properly. My best guess is that the client is not reading the proper number of bytes, but I honestly don't know why that would be the case. Side notes: I realize that the char[]-to-String-to-bytes-to-BufferedImage Java logic is roundabout, but reading the byte stream directly produces the same errors. I have a version of this working where the client socket isn't persistent, i.e. the request is processed and the connection is dropped. That version works fine, as I don't need to care about the image size, but I want to learn why the proposed approach doesn't work.

    Read the article

  • Pointer arithmetic.

    - by Knowing me knowing you
    Having this code:

        int** a = new int*[2];
        a[0] = new int(1);
        a[1] = new int(2);

        cout << "a[0] " << a[0] << '\n';
        cout << "a[1] " << a[1] << '\n';
        cout << "a[2] " << a[2] << '\n';
        cout << "a[0] + 1 " << a[0] + 1 << '\n';                          // WHY THIS ISN'T == a[1] ?
        cout << "*(a + 1): " << *(a + 1) << '\n';                         // WHY THIS IS == a[1] ?
        cout << "a[0] - a[1] " << static_cast<int>(a[0] - a[1]) << '\n';  // WHY THIS IS == 16 not 4?
        cout << sizeof(int**);

    The questions are included right next to the relevant lines in the code.
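    For reference, here is a small sketch, separate from the question's code, spelling out the identities involved: a[1] and *(a + 1) are the same expression by definition, while a[0] + 1 and a[0] - a[1] mix pointers into two unrelated allocations:

        #include <iostream>

        int main() {
            int** a = new int*[2];   // a[0] and a[1] are two separate pointers...
            a[0] = new int(1);       // ...to two separately allocated ints
            a[1] = new int(2);

            // *(a + 1) and a[1] are the same expression by definition, so they match
            std::cout << (*(a + 1) == a[1]) << '\n';   // prints 1

            // a[0] + 1 just points one int past the object a[0] refers to; the two
            // allocations are unrelated, so there is no reason for it to equal a[1]
            std::cout << a[0] + 1 << ' ' << a[1] << '\n';

            // subtracting pointers into different allocations is undefined behaviour;
            // when it appears to work, the difference is measured in ints, not bytes
            std::cout << a[0] - a[1] << '\n';

            delete a[0];
            delete a[1];
            delete[] a;
        }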

    Read the article

  • What is it in the CSS/DOM that prevents an input box with display: block from expanding to the size of its container?

    - by Steven Xu
    Sample HTML/CSS:

        <div class="container">
            <input type="text" />
            <div class="filler"></div>
        </div>

        div.container {
            padding: 5px;
            border: 1px solid black;
            background-color: gray;
        }
        div.filler {
            background-color: red;
            height: 5px;
        }
        input {
            display: block;
        }

    http://jsfiddle.net/bPEkb/3/

    Question: Why doesn't the input box expand to have the same outer width as, say, div.filler? That is to say, why doesn't the input box expand to fit its container like other block elements with width: auto; do? I tried checking the "User Agent CSS" in Firebug to see if I could come up with something there. No luck. I couldn't find any specific differences in CSS that I could link to the input box behaving differently from the regular div.filler. Besides curiosity, I'd like to know why this is, to get to the bottom of it and figure out a way to set the width once and forget it. My current practice of explicitly setting the width of both the input and its containing block element seems redundant and less than modular. While I'm familiar with the technique of wrapping the input element in a div and then assigning negative margins to the input element, this seems quite undesirable.

    Read the article

  • multiple definition in header file

    - by Jérôme
    Here is a small code example about which I'd like to ask a question.

    complex.h:

        #ifndef COMPLEX_H
        #define COMPLEX_H

        #include <iostream>

        class Complex
        {
        public:
            Complex(float Real, float Imaginary);
            float real() const { return m_Real; };
        private:
            friend std::ostream& operator<<(std::ostream& o, const Complex& Cplx);
            float m_Real;
            float m_Imaginary;
        };

        std::ostream& operator<<(std::ostream& o, const Complex& Cplx) {
            return o << Cplx.m_Real << " i" << Cplx.m_Imaginary;
        }

        #endif // COMPLEX_H

    complex.cpp:

        #include "complex.h"

        Complex::Complex(float Real, float Imaginary) {
            m_Real = Real;
            m_Imaginary = Imaginary;
        }

    main.cpp:

        #include "complex.h"
        #include <iostream>

        int main()
        {
            Complex Foo(3.4, 4.5);
            std::cout << Foo << "\n";
            return 0;
        }

    When compiling this code, I get the following error:

        multiple definition of operator<<(std::ostream&, Complex const&)

    I've found that making this function inline solves the problem, but I don't understand why. Why does the compiler complain about multiple definition? My header file is guarded (with #define COMPLEX_H). And if it complains about the operator<< function, why doesn't it complain about the public real() function, which is defined in the header as well? And is there another solution besides using the inline keyword?
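    The include guard only prevents double inclusion within one translation unit; a free function defined in the header still ends up defined in every .cpp that includes it, whereas real(), being defined inside the class body, is implicitly inline. One alternative to marking operator<< inline is to leave only the declaration in the header and move the definition into a single source file; the sketch below shows that as a hypothetical edit, not the original code:

        // complex.h - keep only the declaration inside the class:
        //     friend std::ostream& operator<<(std::ostream& o, const Complex& Cplx);
        // and delete the definition that followed the class.

        // complex.cpp - define the operator exactly once, in one translation unit:
        #include "complex.h"

        std::ostream& operator<<(std::ostream& o, const Complex& Cplx) {
            return o << Cplx.m_Real << " i" << Cplx.m_Imaginary;
        }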

    Read the article

  • PHP Sessions and Passing Session ID

    - by Jason McCreary
    I have an API where I am passing the session id back and forth between calls. I set up the session like so:

        // start API session
        session_name('apikey');
        session_id($data['apikey']); // required to link session
        session_start();

    Although I named my session and am passing the session id via GET and POST using that name, PHP does not automatically resume the session. It always creates a new one unless I explicitly set the session id. I found some old user comments on www.php.net that said that unless the session id is the first parameter, PHP won't pick it up automatically. This seems odd, but even when I tried it, it still didn't work:

        rest_services.php?apikey=sdr6d3subaofcav53cpf71j4v3&q=testing

    I have used PHP for years, but am a little confused about why I need to explicitly set the session with session_id() when I am naming the session and passing its key accordingly.

    UPDATE: It seems I wasn't clear. My question is: why is setting the session ID with session_id() required when I am passing the id, using the session name apikey, via $_GET or $_POST? Theoretically this is no different from PHP's SID when cookies are disabled. But for me it doesn't work unless I explicitly set the session ID. Why?

    Read the article
