Search Results

Search found 666 results on 27 pages for 'disadvantages'.

Page 24/27

  • Exposed onsite vs IFD deployments for MS Dynamics CRM

    - by Greg McGuffey
    I'm working for the first time on a MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants, so it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:

    1. Have everyone use a VPN to access an intranet site (the typical onsite deployment). However, we have found that VPNs are far from trouble-free and cause many support issues; we avoid them like the plague.
    2. Use IFD to expose the CRM on the internet. I don't know much about this, except that the URL will be different from the onsite URL, which could cause some headaches (see below).
    3. Expose the CRM site by opening it to the internet, using SSL to encrypt traffic. We currently do this with our MS SharePoint sites. I'm not sure how secure this would be (one of the reasons for this question).

    I'd like to avoid using both the onsite intranet deployment and the IFD together, for a couple of reasons. First, one request for the solution is to use email to notify users that they've been assigned a task, including the URL of the task in the email; if both deployments are used, I'd need to include two URLs and the user would need to know which to use. Second, the main users of the solution split their time between being in the office and being remote, so they would need to access the solution two different ways and know when to use which. Bad.

    So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues? Thanks!

  • StructureMap Wiring - Sanity Check Please

    - by Steve Ward
    Hi - I'm new to IoC and StructureMap. I have an n-level application and am looking at how to set up the wirings (ForRequestedType ...), and I just want to check with people who have more experience that this is the best way of doing it! I don't want my UI application object to reference my persistence layer directly, so I am not able to wire everything up in the UI project. I now have it working by defining a Registry class in each project, which wires up the types in that project as needed. Each layer registers its own types and also scans the assembly below it for registries, so that all types are registered throughout the hierarchy. E.g. I have UI, Service, Domain, and Persistence libraries. In my service layer the registry looks like:

        Scan(x =>
        {
            x.Assembly("MyPersistenceProject");
            x.LookForRegistries();
        });
        ForRequestedType<IService>().TheDefault.Is.OfConcreteType<MyService>();

    Is this a recommended way of doing this in a setup such as this? Are there better ways, and what are the advantages/disadvantages of these approaches in this case?

  • how to implement enhanced session handling in PHP

    - by praksant
    Hi, I'm working with sessions in PHP, and I have different applications on a single domain. The problem is that cookies are domain-specific, so session IDs are sent to every page on the domain (I don't know if there is a way to make cookies work differently), and session variables are therefore visible to every page on the domain. I'm trying to implement a custom session manager to overcome this behavior, but I'm not sure if I'm thinking about it right. I want to avoid PHP's session system completely and make a global object which stores session data and saves it to a database at the end of the script:

    1. On first access, generate a unique session_id and create a cookie.
    2. At the end of the script, save the session data with the session_id, timestamps for the start of the session and the last access, and data from $_SERVER such as REMOTE_ADDR, REMOTE_PORT and HTTP_USER_AGENT.
    3. On every access, check the database for the session_id sent in the client's cookie, check the IP, port and user agent (for security), and read the data into the session variable (if not expired). If the session_id has expired, delete it from the database.

    The session variable would be implemented as a singleton (I know I get tight coupling with this class, but I don't know a better solution). I'm trying to get the following benefits: session variables invisible to other scripts on the same server and domain; custom management of session expiration; and a way to see open sessions (something like a list of online users). I'm not sure if I'm overlooking any disadvantages of this solution. Is there any better way? Thank you!!
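
    For illustration, here is a minimal sketch of the lookup flow in steps 1-3 above. It is in Python for brevity (the question itself is about PHP), and the sessions table, column names and TTL are assumptions, not part of the original design:

        # Sketch of steps 1-3 above; db is assumed to be a DB-API style
        # handle, and the schema and TTL are placeholders.
        import hashlib, os, time

        SESSION_TTL = 1800  # assumed expiry: 30 minutes

        def new_session_id():
            return hashlib.sha256(os.urandom(32)).hexdigest()

        def load_session(db, cookie_sid, remote_addr, user_agent):
            row = db.execute(
                "SELECT data, last_access, ip, agent FROM sessions WHERE id = ?",
                (cookie_sid,)).fetchone()
            if row is None:
                return None
            data, last_access, ip, agent = row
            if time.time() - last_access > SESSION_TTL:
                db.execute("DELETE FROM sessions WHERE id = ?", (cookie_sid,))
                return None              # expired: treat as no session
            if ip != remote_addr or agent != user_agent:
                return None              # IP/user-agent mismatch: reject
            return data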

  • Use an INI/app.config file or a SQL Server file to store user config?

    - by h2g2java
    I know that the preference for INI or app.config XML is their human readability. Let's say the user preferences stored for my app are hierarchical and number about a thousand items, so it would be really confusing for a user to edit an INI file to change things anyway. I have always used a combination of INI and app.config, but I am now leaning towards a SQL Server database file: every time the user changes a preference while using the app, it would be stored in the db file. Such a config db file could also move around with the app, just like an INI. Before I do that, any advice on two points?

    1. Are there any disadvantages to using a db file rather than INI or app.config?
    2. If a shop uses MySQL or Oracle, would your colleagues raise a pro-MySQL or pro-Oracle eyebrow and question why you would use SQL Server technology in a MySQL or Oracle shop? I mean, I am just using it like an INI file or app.config anyway, right?

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                        # number of dipoles
        r_i = np.random.random((3,N))    # positions of dipoles
        mom_i = np.random.random((3,N))  # moments of dipoles
        a = np.random.random((3,3))      # three basis vectors for this crystal
        n = [10,10,10]                   # points at which to evaluate sum
        gamma_mu = 135.5                 # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
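
    For what it's worth, here is a hedged sketch (not a tested answer) of the kind of rewrite being asked about: build all n[0]*n[1]*n[2] grid points up front with broadcasting, then evaluate the dipole sum one chunk of points at a time, so progress can still be reported and the (m, 3, N) intermediates stay bounded. It assumes a, n, r_i, mom_i and gamma_mu as defined in the code above; the helper names and chunk size are mine:

        # Vectorise over grid points in chunks: m points at a time against
        # all N dipoles, trading the triple loop for a few large arrays.
        def grid_points(a, n):
            fracs = np.array([(float(i)/n[0], float(j)/n[1], float(k)/n[2])
                              for i in range(n[0])
                              for j in range(n[1])
                              for k in range(n[2])])
            return fracs.dot(a)   # each row is fx*a[0] + fy*a[1] + fz*a[2]

        def fields_for_chunk(points, r_i, mom_i):
            # points: (m, 3); r_i, mom_i: (3, N); relative: (m, 3, N)
            relative = points[:, :, None] - r_i[None, :, :]
            dist = np.sqrt((relative**2).sum(axis=1))          # (m, N)
            r_unit = relative / dist[:, None, :]
            A = 1e-7
            mdotr = (mom_i[None, :, :] * r_unit).sum(axis=1)   # (m, N)
            num = A * (3*mdotr[:, None, :]*r_unit - mom_i[None, :, :])
            return (num / dist[:, None, :]**3).sum(axis=2)     # (m, 3)

        points = grid_points(a, n)
        chunk = 50                # tune: memory use vs. progress granularity
        for s in range(0, len(points), chunk):
            B = fields_for_chunk(points[s:s+chunk], r_i, mom_i)
            omega = gamma_mu * np.sqrt((B**2).sum(axis=1))     # one per point
            # write this chunk's r_test, B and omega to the output file here
            print('%.1f%% done' % (100.0*min(s+chunk, len(points))/len(points)))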

  • How can I access a parent DOM from an iframe on a different domain?

    - by Dexter
    I have a website, and my domain is registered through Network Solutions. I'm using their Web Forwarding feature, which allows me to "mask" my domain so that when a user visits http://lucasmccoy.com they are actually seeing http://lucasmccoy.comlu.com/ through an HTML frame. The advantage of this is that the address bar still shows http://lucasmccoy.com/. The disadvantage is that I cannot directly edit the HTML page which owns the frame; for example, I cannot change the page title or favicon. I have tried doing it like so:

        $(function() {
            parent.document.title = 'Lucas McCoy';
        });

    But of course this gives me a JavaScript error:

        Unsafe JavaScript attempt to access frame with URL http://lucasmccoy.com/ from frame
        with URL http://lucasmccoy.comlu.com/. Domains, protocols and ports must match.

    I looked at this question attempting to do the same thing, except the OP has access to the other page's HTML, whereas I do not. Is there any way in JavaScript/jQuery to make a cross-domain request to the DOM when you don't have access to that domain? Or is this something browsers just will not let happen, for security reasons?

  • Should I return IEnumerable<T> or IQueryable<T> from my DAL?

    - by Gary '-'
    I know this could be opinion, but I'm looking for best practices. As I understand it, IQueryable implements IEnumerable, so in my DAL I currently have method signatures like the following:

        IEnumerable<Product> GetProducts();
        IEnumerable<Product> GetProductsByCategory(int categoryId);
        Product GetProduct(int productId);

    Should I be using IQueryable here? What are the pros and cons of either approach? Note that I am planning to use the Repository pattern, so I will have a class like so:

        public class ProductRepository
        {
            DBDataContext db = new DBDataContext(/* connection string */);

            public IEnumerable<Product> GetProductsNew(int daysOld)
            {
                return db.GetProducts()
                         .Where(p => p.AddedDateTime > DateTime.Now.AddDays(-daysOld));
            }
        }

    Should I change my IEnumerable<T> to IQueryable<T>? What advantages/disadvantages are there to one or the other?

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                      <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The first approach I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
    - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
    - (potentially) lower latency of processing each message

    Cons:
    - more processing resources available than handlers for a single message type (bad parallelization)
    - bad use of processor cache, since message objects need to be copied for each handler to use
    - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                      <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                          <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
    - better use of processor cache (keeps the message object local to all handlers which will use it)
    - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
    - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
    - unpredictable ordering: if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence             <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                          <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose, and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
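
    As a toy illustration only (Python with a thread pool standing in for the C++/TBB pipeline; the message shape and handler_table are made up), the second approach looks like this, and running it shows the non-deterministic completion order noted above:

        from concurrent.futures import ThreadPoolExecutor

        def log(m):  print('log  %s' % m)
        def bill(m): print('bill %s' % m)

        # handlers registered per message type, as in the question
        handler_table = {'order': [log, bill], 'cancel': [log]}

        def process(message):
            # second approach: handlers for one message run sequentially,
            # keeping the message local to all handlers that use it
            for handler in handler_table[message['type']]:
                handler(message)

        messages = [{'type': 'order', 'id': i} for i in range(8)]
        with ThreadPoolExecutor(max_workers=4) as pool:
            # messages are processed in parallel, so completion order
            # across messages is non-deterministic (the big Con above)
            list(pool.map(process, messages))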

  • Component based web project directory layout with git and symlinks

    - by karlthorwald
    I am planning the directory structure for a Linux/Apache/PHP web project like this, where only www.example.com/webroot/ will be exposed in Apache:

        www.example.com/
            webroot/
                index.php
                comp1/
                comp2/
            component/
                comp1/
                    comp1.class.php
                    comp1.js
                comp2/
                    comp2.class.php
                    comp2.css
            lib/
                lib1/
                    lib1.class.php

    The component/ and lib/ directories will only be on the PHP include path. To make the CSS and JS files visible under the webroot directory, I am planning to use symlinks:

        webroot/
            index.php
            comp1/
                comp1.js (symlinked)
            comp2/
                comp2.css (symlinked)

    I tried to follow these principles:

    - Lay out by components and libraries, not by file type and not by "public" or "non-public" (index.php is an exception). This is for easier development.
    - Symlink the files that need to be public for the components and libs to a public location, but still mirror the layout. The component and library structure is then also visible in the resulting HTML links, which might help development.
    - git usage should be safe and always work. It would be OK to follow some procedure to add a symlink to git, but after that, checking them out or changing branches should be handled safely and cleanly.

    How will git handle the symlinking of the single files - is there something to consider? When it comes to images I will need to link directories. How do I handle that with git?

        component/
            comp3/
                comp3.class.php
                img/
                    img1.jpg
                    img2.jpg
                    img3.jpg

    They should be linked here:

        webroot/
            comp3/
                img/ (symlinked?)

    If using symlinks for that has disadvantages, maybe I could move the images into the webroot/ tree directly, which would break the first principle for the sake of the third (git practicability). So this is a git and symlink question, but I would also be interested in comments about the PHP layout - maybe you want to use the comment function for that.

  • Encapsulating user input of data for a class (C++)

    - by Dr. Monkey
    For an assignment I've made a simple C++ program that uses a superclass (Student) and two subclasses (CourseStudent and ResearchStudent) to store a list of students and print out their details, with different details shown for the two different types of students (by overriding the display() method from Student). My question is about how the program collects user input such as the student's name, ID number, unit and fee information (for a course student) and research information (for research students).

    My implementation has the prompting for and collecting of user input handled within the classes themselves. The reasoning is that each class knows what kind of input it needs, so it makes sense to me for it to also know how to ask for it (given an ostream through which to ask and an istream to collect the input from). My lecturer says that the prompting and input should all be handled in the main program, which seems messier to me and would make it trickier to extend the program to handle different types of students.

    As a compromise, I am considering a helper class that handles the prompting and collection of user input for each type of Student, which the main program could then call on. The advantage would be that the Student classes have less in them (so they're cleaner), yet the input functionality can still be bundled with them via the helper classes. More types of Student could then be added without major changes to the main program, as long as helper classes are provided for them. The helper class could also be swapped for an alternative-language version without any changes to the Student class itself. A sketch of this option follows below.

    What are the major advantages and disadvantages of the three options for user input (fully encapsulated, helper class, or in the main program)?
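
    To make the comparison concrete, here is a minimal sketch of the helper-class option, in Python rather than the assignment's C++, with illustrative names only:

        class CourseStudent(object):
            def __init__(self, name, student_id, units):
                self.name = name
                self.student_id = student_id
                self.units = units

            def display(self):
                print('%s (%s): %s units' % (self.name, self.student_id, self.units))

        class CourseStudentReader(object):
            """Prompting/collection lives here, not in the Student class
            and not in the main program."""
            @staticmethod
            def read(ask=input):
                name = ask('Name: ')
                student_id = ask('ID number: ')
                units = int(ask('Number of units: '))
                return CourseStudent(name, student_id, units)

        # main() stays small and need not know what each student type asks:
        #     student = CourseStudentReader.read()
        #     student.display()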

  • jQuery - Creating a dynamic content loader using $.get()

    - by Kenny Bones
    Hello everybody! (hello Dr. Nick) :) So I posted a question yesterday about a content loader plugin for jQuery I thought I'd use, but didn't get it to work: http://stackoverflow.com/questions/2469291/jquery-could-use-a-little-help-with-a-content-loader. Although it works now, I see some disadvantages to it: it requires heaps of files for the content to live in, since the code essentially picks up the URL in the href of a link and searches that file for a div called #content. What I would really like to do is collect all of these files into a single file, give each div/container its own unique ID, and just pick up the content from those, so I won't need so many separate files lying around. Nick Craver thought I should use $.get() instead, since it has a decent callback, but I'm not that strong in JS at all and don't even know what this means. I'm basically used to Visual Basic - passing arguments, storing in txt files, etc. - which is really not suitable for this purpose. So what's the "normal" way of doing things like this? I'm pretty sure I'm not the only one who's thought of this, right? I basically want to get content from a single PHP file that contains a lot of divs with unique IDs and, without much hassle, fade out the existing content on my main page, pick up the content from the other file, and fade it into my main page.

  • Touch draw in Quartz 2D/Core Graphics

    - by OgreSwamp
    Hello, I'm trying to implement a "hand draw tool". At the moment the algorithm looks like this (I won't insert any code because the methods are quite big, so I'll try to explain the idea):

    Drawing:
    - In the touchesBegan: method I create an NSMutableArray *pointsArray, add the point to it, and call setNeedsDisplay.
    - In the touchesMoved: method I calculate the points between the last added point in pointsArray and the current point, add all of them to pointsArray, and call setNeedsDisplay.
    - In the touchesEnded: method I calculate the points between the last added point in the array and the current point, set a touchesWereFinished flag, and call setNeedsDisplay.

    Rendering:
    - The drawRect: method checks that pointsArray != nil and that there is data in it. If there is, it draws a circle at each point in the array. If the touchesWereFinished flag is set, it saves the current context to a UIImage, releases pointsArray, sets it to nil, and resets the flag.

    There are a lot of disadvantages to this method:
    - It is slow.
    - It becomes extremely slow when the user touches and moves the finger for a long time, and the array becomes enormous.
    - "Lines" composed of circles are ugly.

    I would like to change the algorithm to make it a bit faster and the lines smoother. As a result I would like to have lines like in the picture at the following URL (sorry, not enough reputation to insert an image): http://2.bp.blogspot.com/_r5VzEAUYXJ4/SrOYp8tJCPI/AAAAAAAAAMw/ZwDKXiHlhV0/s320/SketchBook+Mobile(4).png
    Can you advise how I can draw lines this way (smooth and slim at the edges)? I thought of drawing circles with an alpha gradient at the edges (to make the lines smoother), but that would be extremely slow IMHO. Thanks for the help!

  • Queries within queries: Is there a better way?

    - by mririgo
    As I build bigger, more advanced web applications, I find myself writing extremely long and complex queries. I tend to write queries within queries a lot, because I feel that making one call to the database from PHP is better than making several and correlating the data. However, anyone who knows anything about SQL knows about JOINs. Personally, I've used a JOIN or two before, but quickly stopped when I discovered subqueries, because they felt easier and quicker for me to write and maintain. Commonly, I'll do subqueries that may themselves contain one or more subqueries on related tables. Consider this example:

        SELECT
            (SELECT username FROM users WHERE records.user_id = user_id) AS username,
            (SELECT last_name||', '||first_name FROM users WHERE records.user_id = user_id) AS name,
            in_timestamp,
            out_timestamp
        FROM records
        ORDER BY in_timestamp

    Rarely, I'll do subqueries inside the WHERE clause. Consider this example:

        SELECT
            user_id,
            (SELECT name
             FROM organizations
             WHERE (SELECT organization FROM locations WHERE records.location = location_id) = organization_id
            ) AS organization_name
        FROM records
        ORDER BY in_timestamp

    In these two cases, would I see any sort of improvement if I rewrote the queries using a JOIN? As more of a blanket question, what are the advantages/disadvantages of using subqueries or a JOIN? Is one way more correct or accepted than the other?

  • MySQL FOR UPDATE

    - by shantanuo
    MySQL supports "for update" keyword. Here is how I tested that it is working as expected. I opened 2 browser tabs and executed the following commands in one window. mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select * from myxml where id = 2 for update; .... mysql> update myxml set id = 3 where id = 2 limit 1; Query OK, 1 row affected, 1 warning (0.00 sec) Rows matched: 1 Changed: 1 Warnings: 0 mysql> commit; Query OK, 0 rows affected (0.08 sec) In another window, I started the transaction and tried to take an update lock on the same record. mysql> start transaction; Query OK, 0 rows affected (0.00 sec) mysql> select * from myxml where id = 2 for update; Empty set (43.81 sec) As you can see from the above example, I could not select the record for 43 seconds as the transaction was being processed by another application in the Window No 1. Once the transaction was over, I got to select the record, but since the id 2 was changed to id 3 by the transaction that was executed first, no record was returned. My question is what are the disadvantages of using "for update" syntax? If I do not commit the transaction that is running in window 1 will the record be locked for-ever?

  • Should conditional expressions go inside or outside of classes?

    - by Rupert
    It seems that I often want to execute some method from a class when I instantiate it, with the choice of method depending on some condition. This leads me to write classes like in Case 1, because it allows me to rapidly include their functionality. The alternative would be Case 2, which can take a lot of time if there is a lot of code, and also means more code being written twice when I drop the class into different pages. Having said that, Case 1 feels very wrong for some reason that I can't quite put my finger on, and I haven't really seen any classes written like this. Is there anything wrong with writing classes like in Case 1, or is Case 2 superior? Or is there a better way? What are the advantages and disadvantages of each?

    Case 1:

        class Foo {
            public function __construct($bar) {
                if ($bar == 'action1')
                    $this->method1();
                else if ($bar == 'action2')
                    $this->method2();
                else
                    $this->method1();
            }

            public function method1() { }
            public function method2() { }
        }

        $bar = 'action1';
        $foo = new Foo($bar);

    Case 2:

        class Foo {
            public function __construct() { }

            public function method1() { }
            public function method2() { }
        }

        $foo = new Foo;
        $bar = 'action1';

        if ($bar == 'action1')
            $foo->method1();
        else if ($bar == 'action2')
            $foo->method2();
        else
            $foo->method1();

  • Want to add a functional language to my toolchest. Haskell or Erlang?

    - by sean.johnson
    I've been an OO/procedural guy my whole career, except in school where I did a lot of logic programming (Prolog). I work on an amazing variety of projects (freelancer), and I don't want the tools I know and understand to hold me back from using the right tool for the job. I've decided I should know a functional programming language, and I've narrowed the field to Haskell and Erlang. What are the pros and cons, advantages and disadvantages, and major trade-offs of Haskell and Erlang? How do I decide, in a rational way, which is the better path? This is a big time investment, so I'd like to choose wisely. Is there a good case to be made for something else entirely: F#, Scala, OCaml? (BTW, I'm normally a Ruby/C/Obj-C guy, so I'm not terribly impressed by or dependent on the JVM as a runtime; it's completely neutral to me. It's a fine runtime, and I don't hold it for or against a language. I don't use Microsoft products, though, so a .NET runtime would be a negative.)

  • Java: any problems/negative sides of keeping SoftReference to ArrayList in HttpSession?

    - by westla7
    My code does the following (just as an example; the reason I specify the package path to java.lang.ref.SoftReference is to note that it's not my own implementation :-):

        List<String> someData = new ArrayList<String>();
        someData.add("Value1");
        someData.add("Value2");
        ...
        java.lang.ref.SoftReference softRef = new SoftReference(someData);
        ...
        HttpSession session = request.getSession(true);
        session.setAttribute("mySoftRefData", softRef);

    and later:

        java.lang.ref.SoftReference softRef =
            (java.lang.ref.SoftReference) session.getAttribute("mySoftRefData");
        if (softRef != null && softRef.get() != null) {
            List<String> someData = (List<String>) softRef.get();
            // do something with it.
        }

    Any disadvantages which I do not see? Thank you!

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" ) {
                if ( argc < 1 ) { Error('blah'); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" ) {
                if ( argc < 1 ) { Error('blah'); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" ) {
                exit( 0 );
            }
        }

    Now imagine each else if being ten times more complicated, and 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how do I create some sort of libraries that contain all the functions and, when imported, initialize themselves and add their functions to the symbol table of the running interpreter? This is the point where I don't really know how to go on. What I want to achieve is that there is, e.g., an (external?) string library for my language, which is imported from within a program in that language, for example:

        import string

        myString = "abcde"
        print string.at( myString, 2 )  # output: c

    My problems:
    - How do I separate the function libs from the interpreter core and load them?
    - How do I get all their functions into a list and add it to the symbol table when needed?

    What I was thinking of doing: at interpreter startup, with all libraries compiled into it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ), which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. A disadvantage that springs to mind: with all libraries compiled directly into the interpreter core, its size grows and it will definitely slow down.
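
    For what it's worth, here is a minimal sketch of the RegisterFunction scheme described above: a registry that each compiled-in library fills at startup, from which import copies entries into the live symbol table. It is in Python rather than the interpreter's C++, and every name in it is illustrative:

        registry = {}   # "namespace.function" -> callable, filled at startup

        def register_function(namespace, name, fn):
            registry['%s.%s' % (namespace, name)] = fn

        # each compiled-in library registers its functions once at startup
        def string_at(s, i):
            return s[i]
        register_function('string', 'at', string_at)

        # 'import X' then copies that namespace's entries into the running
        # program's symbol table
        def do_import(symbol_table, namespace):
            prefix = namespace + '.'
            for qualified, fn in registry.items():
                if qualified.startswith(prefix):
                    symbol_table[qualified] = fn

        symbols = {}
        do_import(symbols, 'string')
        print(symbols['string.at']('abcde', 2))   # -> c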

  • assembling an object graph without an ORM -- in the service layer or data layer?

    - by Hans Gruber
    At my current gig, our persistence layer uses iBATIS going against SQL Server stored procedures (puke). IMHO this approach has many disadvantages compared to a "true" ORM such as NHibernate or EF, but the one I'm trying to address here revolves around all the boilerplate code needed to map data from a result set into an object graph. Say I have the following DTO object graph that I want to return to my presentation layer:

        IEnumerable<CustomerDTO>
            |--> IEnumerable<AddressDTO>
            |--> LatestOrderDTO

    The way I've implemented this is to have a discrete method in my DAO class to return each IEnumerable<*DTO>, and then have my service class be responsible for orchestrating the calls to the DAO. It then returns the fully assembled object graph to the client:

        public class SomeService
        {
            public SomeService(IDao someDao)
            {
                this._someDao = someDao;
            }

            public IEnumerable<CustomerDTO> ListCustomersForHistory(int brokerId)
            {
                var customers = _someDao.ListCustomersForBroker(brokerId);
                foreach (var customer in customers)
                {
                    customer.Addresses = _someDao.ListCustomersAddresses(brokerId);
                    customer.LatestOrder = _someDao.GetCustomerLatestOrder(brokerId);
                }
                return customers;
            }
        }

    My question is: should this logic belong in the service layer, or should I make my DAO return the assembled object graph instead? If I were using NHibernate, I assume this kind of association between objects would come for "free"?

  • Best (Java) book for understanding 'under the bonnet' for programming?

    - by Ben
    What would you say is the best book to buy to understand exactly how programming works under the hood, in order to increase performance? I've coded in assembly at university, I studied computer architecture, and I've obviously done high-level programming, but what I really don't understand is things like:

    - What is happening when I perform a cast?
    - What is the difference in performance if I declare something global as opposed to local?
    - How does the memory layout for an ArrayList compare with a Vector or LinkedList?
    - What is the overhead with pointers?
    - Are locks more efficient than using synchronized?
    - Would creating my own array using int[] be faster than using an ArrayList?
    - What are the advantages/disadvantages of declaring a variable volatile?

    I have a copy of Java Performance Tuning, but it doesn't go down very low, and it contains rather obvious suggestions such as using a HashMap instead of an ArrayList, as you can map the keys to memory addresses, etc. I want something a bit more computer-sciencey, linking the programming language to what happens with the assembler/hardware. The reason I'm asking is that I have an interview coming up for a job in high-frequency trading, where everything has to be as efficient as possible, yet I can't remember every single possible efficiency saving, so I'd just like to learn the fundamentals. Thanks in advance.

  • General web service ideas

    - by user2014175
    I have a question regarding different types of web services. I'll preface this by saying that I have built a number of apps (for both iOS and Android) for personal use that interact with the web via PHP and SQL. I have taught myself these languages, and as such don't have the broader background knowledge that many of you do. My question is: in what other ways can you perform an interaction between a web service and a mobile device, other than mobile -> PHP -> SQL -> etc.? For example, if I built a very simple tracking app for my car, my current method would be to push GPS coordinates from my iPhone to my database at a set interval, then write a simple bit of JavaScript that pulls those coordinates out of the database and superimposes them on a Google map. Is there a different way to do this, such as the server acting as a live middleman that pushes the coordinates directly to a target browser, without the database in the middle? If so, are there advantages and disadvantages to these different methods for achieving different goals? I know it's a broad question, but I'm really intrigued, and I'm finding it difficult to word a Google search for it. Any info / reading material suggestions would be excellent. Thanks.

  • Best solution for __autoload

    - by tpk
    As our PHP5 OO application grew (in both size and traffic), we decided to revisit our __autoload() strategy. We always name the file after the class definition it contains, so class Customer would be contained within Customer.php. We used to list the directories in which a file could potentially exist and search them until the right .php file was found. This is quite inefficient, because you're potentially going through a number of directories you don't need to, and doing so on every request (thus making loads of stat() calls). Solutions that come to mind:

    - Use a naming convention that dictates the directory name (similar to PEAR). Disadvantages: doesn't scale too well, resulting in horrible class names.
    - Come up with some kind of pre-built array of the locations (Propel does this for its __autoload). Disadvantage: requires a rebuild before any deploy of new code.
    - Build the array "on the fly" and cache it. This seems to be the best solution, as it allows any class names and directory structure you want, and is fully flexible in that new files just get added to the list. The concerns are: where to store it, and what about deleted/moved files? For storage we chose APC, as it doesn't have the disk I/O overhead. As for file deletes, it doesn't matter, as you probably don't want to require them anywhere anyway. As to moves... that's unresolved (we ignore it, as historically it hasn't happened very often for us).

    Any other solutions?
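
    As a language-neutral sketch of that third option (Python here purely for brevity, since the question is about PHP; the cache store and source root are placeholders for APC and the real code tree):

        import os

        class ClassMap(object):
            def __init__(self, source_root, cache):
                self.source_root = source_root
                self.cache = cache          # stands in for APC here

            def locate(self, class_name):
                class_map = self.cache.get('class_map')
                if class_map is None or class_name not in class_map:
                    # build "on the fly": one directory walk on the first
                    # request (or on a miss), instead of repeated stat()
                    # calls on every request
                    class_map = self._scan()
                    self.cache.set('class_map', class_map)
                return class_map.get(class_name)

            def _scan(self):
                class_map = {}
                for dirpath, _, filenames in os.walk(self.source_root):
                    for name in filenames:
                        if name.endswith('.php'):
                            class_map[name[:-4]] = os.path.join(dirpath, name)
                return class_map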

  • Which way is preferred when doing asynchronous WCF calls?

    - by Mikael Svenson
    When invoking a WCF service asynchronously, there seem to be two ways it can be done.

    1.

        public void One()
        {
            WcfClient client = new WcfClient();
            client.BegindoSearch("input", ResultOne, null);
        }

        private void ResultOne(IAsyncResult ar)
        {
            WcfClient client = new WcfClient();
            string data = client.EnddoSearch(ar);
        }

    2.

        public void Two()
        {
            WcfClient client = new WcfClient();
            client.doSearchCompleted += TwoCompleted;
            client.doSearchAsync("input");
        }

        void TwoCompleted(object sender, doSearchCompletedEventArgs e)
        {
            string data = e.Result;
        }

    And with the new Task<T> class we have an easy third way, by wrapping the synchronous operation in a task.

    3.

        public void Three()
        {
            WcfClient client = new WcfClient();
            var task = Task<string>.Factory.StartNew(() => client.doSearch("input"));
            string data = task.Result;
        }

    They all give you the ability to execute other code while you wait for the result, but I think Task<T> gives better control over what you execute before or after the result is retrieved. Are there any advantages or disadvantages to using one over the other? Or scenarios where one way of doing it is preferable?

  • What is PocoCapsule current status?

    - by seas
    What is PocoCapsule's current status? Is it evolving? Has it been forked into some other product? And what about the whole idea of IoC for C++: if PocoCapsule is not evolving, is it because IoC was considered not useful for C++, or unsafe, or because other patterns appeared, or something else? As far as I understand, there are 2-3, maybe a few more, products that implement IoC for C++, and PocoCapsule is the most mature of them. I see several disadvantages in the current version (as I see it, 1.1 from Google Code):

    - No separate namespace.
    - Header files are required to be placed right in the INCLUDE folder - better to place them in a subfolder.
    - The generation tools depend on Java.
    - No static linking libraries are built by default.
    - It cannot generate source code out of setup.xml for compilation and linking with my app if I don't need the reconfiguration feature.

    Does anybody have the same thoughts? Is anybody working on something from this list? Are there any barriers to starting work, like patents?

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in future lossy and lossless compressions. [source] It is also mentioned that if the image dimensions are a multiple of 1 MCU (Minimum Coded Unit, 'usually 16 pixels in both directions'), lossless alterations to the JPEG can be performed. [source] I am working with product images and would like to know both whether, and how much, benefit can be derived from using multiples of 16 in my final image size (say, an image of 480px by 360px) versus a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer, where I know there must be large generalities, given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]:

    1. Can I expect any quality loss from an image that is 484x362?
    2. What amount of file-size addition can I expect (for this example, the additional space would be white pixels)?
    3. Are there any other disadvantages to growing larger than the 8px grid?

    I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking the 8px grid. The key issue in a debate I've had is whether 8-pixel-divisible images are higher quality than images that are not divisible by 8 pixels.
