Search Results

Search found 24201 results on 969 pages for 'andrew case'.

  • Some emails are being delivered, some returned

    - by Tom Broucke
    I have my own VPS where a site is running (control panel: DirectAdmin). When I send mail, some messages are delivered (hotmail, gmail, [email protected], ...), others are not ([email protected]), and others are delivered after being greylisted ([email protected]). Below are the relevant excerpts from /var/log/exim/mainlog. What could be the cause of this? Is the problem sender-side or receiver-side?

    Case 1: [email protected] (delivered)

        2012-06-20 15:02:03 1ShKXr-0005Sc-7g <= [email protected] U=apache P=local S=1319 T="Password reset" from <[email protected]> for [email protected]
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g gmail-smtp-in-v4v6.l.google.com [2a00:1450:8005::1b] Network is unreachable
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g => [email protected] F=<[email protected]> R=lookuphost T=remote_smtp S=1355 H=gmail-smtp-in-v4v6.l.google.com [173.194.67.27] X=TLSv1:RC4-SHA:128 C="250 2.0.0 OK 1340196103 cp4si34336466wib.14"
        2012-06-20 15:02:03 1ShKXr-0005Sc-7g Completed

    Case 2: [email protected] (not delivered)

        2012-06-21 09:57:14 1ShcGQ-0007No-5H <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=740 [email protected] T="hey" from <[email protected]> for [email protected]
        2012-06-21 09:57:14 1ShcGQ-0007No-5H ** [email protected] F=<[email protected]> R=virtual_aliases:
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z <= <> R=1ShcGQ-0007No-5H U=mail P=local S=1546 T="Mail delivery failed: returning message to sender" from <> for [email protected]
        2012-06-21 09:57:14 1ShcGQ-0007No-5H Completed
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z => info <[email protected]> F=<> R=virtual_user T=virtual_localdelivery S=1643
        2012-06-21 09:57:14 1ShcGQ-0007Nt-7Z Completed

    Case 3: [email protected] (greylisted)

        2012-06-21 15:29:02 1ShhRW-000862-BV <= [email protected] H=localhost ([91.230.245.141]) [127.0.0.1] P=esmtpa A=login:[email protected] S=782 [email protected] T="testmail squirrel" from <[email protected]> for [email protected]
        2012-06-21 15:29:02 1ShhRW-000862-BV SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b1.one.com [195.47.247.194]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes
        2012-06-21 15:29:02 1ShhRW-000862-BV == [email protected] R=lookuphost T=remote_smtp defer (-44): SMTP error from remote mail server after RCPT TO:<[email protected]>: host mx-cluster-b2.one.com [195.47.247.195]: 450 4.7.1 <[email protected]>: Recipient address rejected: Greylisted for 5 minutes

    Notice that the "from" address in case 1 differs from the one in case 2: [email protected] versus [email protected]. Thanks for your time!
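
    For triage of a large mainlog, the delivery flags visible above can be tallied mechanically: exim marks arrivals with <=, successful deliveries with =>, hard failures with ** and deferrals with ==. Note that the case 2 failure is flagged ** on the local virtual_aliases router, before any remote host is contacted, which suggests a sender-side routing or alias problem rather than a rejection by the receiving server. Below is a minimal, illustrative Python sketch of such a tally, assuming the standard mainlog line layout shown above:

        # triage_mainlog.py - tally exim delivery outcomes per recipient.
        # Assumes the standard mainlog layout: date time msg-id FLAG recipient ...
        # where FLAG is => (delivered), ** (failed) or == (deferred).
        import sys
        from collections import Counter

        FLAGS = {'=>': 'delivered', '**': 'failed', '==': 'deferred'}
        outcomes = Counter()

        with open(sys.argv[1]) as log:
            for line in log:
                parts = line.split()
                if len(parts) > 4 and parts[3] in FLAGS:
                    outcomes[(parts[4], FLAGS[parts[3]])] += 1

        for (recipient, outcome), count in sorted(outcomes.items()):
            print(f'{recipient:40} {outcome:10} {count}')

    Run as python triage_mainlog.py /var/log/exim/mainlog; recipients that only ever show up as failed or deferred stand out immediately.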

  • mount issue in ubuntu 12.10

    - by Vipin Ms
    I'm having an issue with the latest Ubuntu 12.10. Let me make it more clear. I have the following partitions on my laptop:

        Device     Boot      Start        End     Blocks    Id  System
        /dev/sda1            2048     39997439   19997696   83  Linux
        /dev/sda2  *         40001850  81947564  20972857+  83  Linux
        /dev/sda3            81947565  123877214 20964825   83  Linux
        /dev/sda4            123887614 976773119 426442753   5  Extended
        /dev/sda5            123887616 333602815 104857600  83  Linux
        /dev/sda6            333604864 543320063 104857600  83  Linux
        /dev/sda7            543322112 753037311 104857600  83  Linux
        /dev/sda8            753039360 976773119 111866880  83  Linux

    I also have two users, named "ms" and "abc". Here "ms" is for administrative tasks and "abc" is for my friends. When I mount any drive under the "abc" user, I cannot access it under my other user "ms", and vice versa. I found the likely reason behind the issue: when I mount a drive under the "abc" user, Ubuntu mounts it under "/media/abc/volume_name" instead of "/media/volume_name", and likewise for the "ms" user.

        # df -h
        Filesystem      Size  Used Avail Use% Mounted on
        /dev/sda1        19G   11G  7.5G  59% /
        udev            1.5G  4.0K  1.5G   1% /dev
        tmpfs           599M  896K  598M   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            1.5G  620K  1.5G   1% /run/shm
        none            100M   92K  100M   1% /run/user
        /dev/sda2        20G  172M   19G   1% /media/abc/TEST
        /dev/sdb1       466G  353G  114G  76% /media/abc/F088F74288F7063E
        /dev/sdb2       466G  318G  148G  69% /media/abc/New Volume
        /dev/sda5        99G   94G  323M 100% /media/abc/Songs
        /dev/sda6        99G   31G   63G  34% /media/ms/Films

    Here you can see that "TEST" was mounted under "/media/abc/TEST". When I try to access the already-mounted partition "/media/abc/TEST" in my "ms" session, I get an error. How do I fix this? Is it a bug? Is there any way to fix this without modifying the underlying file-system structure?
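
    One possible workaround (a sketch, not a verified fix for 12.10's per-user udisks2 behaviour) is to pin the partition to a fixed, shared mount point via /etc/fstab, so it is no longer auto-mounted under /media/<user>. The mount point name and filesystem type below are assumptions; adjust them to the real partition:

        # /etc/fstab - example entry (filesystem type assumed to be ext4).
        # Mounts /dev/sda2 at a location both "ms" and "abc" can reach.
        /dev/sda2  /media/TEST  ext4  defaults  0  2

    After creating the directory (sudo mkdir -p /media/TEST) and running sudo mount /media/TEST, the partition should appear in the same place for every user.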

  • Query something and return the reason if nothing has been found

    - by Daniel Hilgarth
    Assume I have a Query - as in CQS - that is supposed to return a single value. Let's assume that the case where no value is found is not exceptional, so no exception will be thrown; instead, null is returned. However, if no value has been found, I need to act according to the reason why no value has been found. Assuming that the Query knows the reason, how would I communicate it to the caller of the Query? A simple solution would be to return not the value directly but a container object that contains the value and the reason:

        public class QueryResult<TValue, TReason>
        {
            public TValue Value { get; private set; }
            public TReason ReasonForNoValue { get; private set; }
        }

    But that feels clumsy, because if a value is found, ReasonForNoValue makes no sense, and if no value has been found, Value makes no sense. What other options do I have to communicate the reason? What do you think of one event per reason? For reference: this is going to be implemented in C#.
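
    Language aside, the container can be made less clumsy by making the two outcomes distinct types, so that each case only carries the field that makes sense for it; the caller then branches on the type rather than checking for null. A quick illustrative sketch in Python (all names invented):

        # Found/NotFound as two distinct result types: neither case carries
        # a meaningless field, unlike a single container with Value + Reason.
        class Found:
            def __init__(self, value):
                self.value = value

        class NotFound:
            def __init__(self, reason):
                self.reason = reason

        def find_user(user_id, users):
            if user_id not in users:
                return NotFound("no such id")
            return Found(users[user_id])

        result = find_user(42, {1: "alice"})
        if isinstance(result, Found):
            print(result.value)
        else:
            print("not found because:", result.reason)

    The same shape is expressible in C# with a small class hierarchy and an is-check or a visitor.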

  • How to get feedback from the community on large chunks of code?

    - by MainMa
    Code Review.SE is great when you need feedback on a precise, short piece of code. But where do you get similar feedback about the code itself when: you have thousands of LOC, you don't have colleagues in your workplace ready or willing to review the code¹, and you don't have thousands of dollars to spend on a professional review by a third-party developer?² Places like CodePlex are a good way to get your project known³, but from what I've seen, the feedback you get on known projects is consumer feedback, i.e. it concerns bugs and feature requests, not the quality of the source code itself. What are the social ways to get the community involved in reviewing the codebase of an open source project of a certain size which doesn't have the scale of Firefox or similar products?

    ¹ Which is the case for most personal and open source projects, or projects done in companies where the practice of regular and complete code review is nonexistent.
    ² Which is, again, the case for most personal and open source projects.
    ³ Even if too many projects published on CodePlex never get known, either because nobody cares or because they are not presented very well.

  • How should I use my new SSD drive?

    - by jasondavis
    I just built a new PC the other day. Specs:

        Processor: Intel i7-930 quad core CPU
        CPU Cooler: Cooler Master Hyper 212
        Motherboard: ASRock X58 Extreme3
        RAM/Memory: 6GB G.Skill triple channel DDR3 (3 sticks of 2GB; planning to get another kit to make it 12GB total soon)
        Operating System Hard Drive: Intel X25-M 80GB Mainstream SATA2 Solid State Drive
        Video Cards: 2 XFX ATI Radeon HD 4650 cards to run 3-4 monitors
        Case: Lian Li PC-B10 Midtower case
        Power Supply: Antec TruePower New TP-750 Blue 750W
        Operating System: Windows 7 Pro 64bit

    Not sure if the specs are helpful at all, but I posted them just in case. So I got everything put together and running great so far, but I need some advice/ideas/help/tips. I got the SSD in hopes of using it strictly for my Windows 7 install along with all the other programs I install. I am then going to get another drive or two just for data (video, music, photos, etc). My plan is to install the new data drives and then, in Windows 7, change my "My Documents", "My Music", "My Video" and "My Photos" libraries to be located on the data drives instead of the OS SSD. I would ultimately like to install all my programs with my Windows install on the SSD and then create an IMAGE of the drive; then 6 months down the road, if things are sluggish, I can just wipe the drive and restore my IMAGE with all my programs and settings intact. So here are some questions now.

    1) How can I verify that TRIM is working on my new SSD? (See the check below.)
    2) Is there anything above that I missed that I should be doing? I think I once read that there is a page file, or some sort of file that Windows changes a lot, that should be moved off an SSD and onto my data drives. Does anyone know what I might have heard? If you do, can you explain the pros and cons of doing such a thing, as well as how to do it?
    3) Any tips or advice to get the best performance from all this? I built a pretty nice system and I just want to keep it that way as long as I can.
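
    For question 1, on Windows 7 TRIM support can be checked from an administrator command prompt with fsutil; a result of 0 means TRIM commands are enabled (the annotation in parentheses is mine, not part of the output):

        C:\> fsutil behavior query DisableDeleteNotify
        DisableDeleteNotify = 0    (0 = TRIM enabled, 1 = disabled)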

  • Is implementing an interface defined in a subpackage an anti-pattern?

    - by Michael Kjörling
    Let's say I have the following:

        package me.my.pkg;

        public interface Something {
            /* ... couple of methods go here ... */
        }

    and:

        package me.my;

        import me.my.pkg.Something;

        public class SomeClass implements Something {
            /* ... implementation of Something goes here ... */
            /* ... some more method implementations go here too ... */
        }

    That is, the class implementing an interface lives closer to the package hierarchy root than does the interface it implements, but they both belong to the same package hierarchy. The reason for this in the particular case I have in mind is that there is a previously-existing package that groups functionality to which the Something interface logically belongs, and the logical (as in both "the one you'd expect" and "the one where it needs to go given the current architecture") implementation class exists already and lives one level "up" from the logical placement of the interface. The implementing class does not logically belong anywhere under me.my.pkg. In my particular case, the class in question implements several interfaces, but that feels like it doesn't make any (or at least any significant) difference here. I can't decide if this is an acceptable pattern or not. Is it or is it not, and why?

  • UNC vs. SFTP vs. SSH for uploading to a Windows server

    - by apollodude217
    I understand that UNC, SFTP, and SSH are, of course, different interfaces (protocols?). But feature-wise, how do they differ? Are there things you can do with one that you cannot do with another? Is one more secure than another? The situation I want to fix is one where we have several Windows servers and VPCs, some of which have SFTP servers and some of which don't. For those that don't, we use UNC over a VPN shared by the entire enterprise. What I want to do is use either all UNC, all SFTP, or all SSH (unless a real need to vary on a case-by-case basis presents itself). Links would be excellent. My biggest problem here is that my googling brings up irrelevant results. :(

  • Leveraging NuGet as a central repository for PowerShell modules

    - by cibrax
    We have been working a lot lately with PowerShell as part of our star product at Tellago Studios, "Moesion". One of the main features we provide in Moesion is the ability to execute PowerShell commands remotely on a given server using a mobile web interface (you can read more in my previous post about Moesion). One of the things we realized in all this time is that PowerShell lacks a central repository where IT guys, or we the developers, can easily grab and reuse commands. All the commands or modules are basically spread across multiple places or websites, like personal blogs, TechNet or CodePlex projects to name a few, making searching for them very hard. You are usually limited to using your favorite search engine and copying what you find. In addition, there is no easy way to reuse, extend or version these commands, which also limits any contribution you could make to the community. My friend Jose wrote a great post the other day about the importance of reusing PowerShell modules, and the mechanism for reusing them. Jose, however, based his post on a custom implementation using a Git repository for storing the modules. We have NuGet in the .NET platform for sharing and reusing existing libraries or code, so why not just leverage it for reusing PowerShell modules as well? Some teams in Microsoft are using NuGet for distributing libraries and binaries, so it would be a great thing for all of us if they also distributed the scripting interfaces in PowerShell using NuGet. This applies to the .NET open source community as well. In fact, it looks like Andrew Nurse had the same idea and implemented a project for this on BitBucket, PsGet.

  • Are long methods always bad?

    - by wobbily_col
    So, looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, with one long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting / filtering the queryset - then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop which get altered during the loop:

        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables ...
        return queryset, context

    So according to the theory, I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable: you need to constantly jump from one method to the next, figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted you can see the logic more easily, as it isn't hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat the inner loops for each task, but then there will be a performance hit). So is there a case that long methods are not always bad? Is there always a case for extracting methods when they will only be used in one place?
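
    For comparison, here is a minimal sketch of the extract-method version of the loop above (condition and variable names invented), which keeps the view a page long at the cost of exactly the jumping-around the question describes:

        # Hypothetical refactoring: the conditional aggregation moves into a
        # named helper so the view body stays short.
        def aggregate_flags(queryset):
            variable_1 = 0
            variable_2 = 0
            for obj in queryset:
                if obj.condition_a and variable_2 > 0:
                    variable_1 += 1
                # ... more conditions that alter the variables ...
            return variable_1, variable_2

        def my_view_data(queryset):
            variable_1, variable_2 = aggregate_flags(queryset)
            context = {"v1": variable_1, "v2": variable_2}
            return queryset, context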

  • HTTP resource caching / fetching

    - by Bobby Jack
    I'm trying to optimise a page, and I'm seeing some strange behaviour. Each time I click on a link to the page, all resources are fetched from the server, responding with 200s. However, when I refresh the page (specifically, F5 in Firefox), all resources return a 304 and - of course - the page loads much faster as a result. The main page returns a 200 in both cases. In the refresh case, If-Modified-Since headers are sent with the requests to the resources. However, in the 'clicking a link' case, they are not. What's the reason for that, and can I control it?
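
    For completeness: if the goal is for link navigations to reuse the cache without re-fetching at all, the usual lever is an explicit freshness lifetime on the resources, since a response with only a Last-Modified validator leaves the reuse policy up to the browser. A hedged sketch assuming Apache with mod_headers enabled (the file pattern and lifetime are assumptions; any server can send the equivalent header):

        # Apache sketch (httpd.conf or .htaccess) - requires mod_headers.
        # Lets browsers reuse matching resources for a day without
        # revalidating on navigation.
        <FilesMatch "\.(css|js|png|jpg)$">
            Header set Cache-Control "public, max-age=86400"
        </FilesMatch>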

  • Funnelling http traffic

    - by spencer p
    I have a situation where a large batch of servers (X) will, on demand, need to request data from a smaller set of web servers (Y). The worst-case scenario is if all servers in X decide to send different requests to one server in Y; that would be X connections, which could be a very large burst of traffic. The best-case scenario is if one server in X hits one server in Y in tandem. Life does not work like this. One idea to entertain is placing a proxy, similar to squid, between X and Y. All of the X servers can connect to this proxy, but it would result in only a few persistent (HTTP keep-alive) connections to Y. If "a few" were, say, 3 or 4, then it would funnel. If we could then rate-limit those connections, and traffic decided to spike unusually high, we wouldn't hurt anyone but ourselves. Thoughts?
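
    As a client-side variant of the same funnelling idea (plainly not the shared squid box, just the same principle applied inside each X server), an HTTP client with a small, bounded keep-alive pool holds a few persistent connections to Y instead of opening one per request. An illustrative Python sketch using the requests library (the host name is invented):

        # Each X-server process funnels its requests through at most 4
        # persistent sockets to Y; pool_block=True makes the cap hard.
        import requests
        from requests.adapters import HTTPAdapter

        session = requests.Session()
        adapter = HTTPAdapter(pool_connections=1, pool_maxsize=4, pool_block=True)
        session.mount("http://y-server.example/", adapter)

        for path in ("/a", "/b", "/c"):
            resp = session.get("http://y-server.example" + path)  # reuses pooled sockets
            print(path, resp.status_code)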

  • What are some practical uses of the "new" modifier in C# with respect to hiding?

    - by Joel Etherton
    A co-worker and I were looking at the behavior of the new keyword in C# as it applies to the concept of hiding. From the documentation: Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier. We've read the documentation, and we understand what it basically does and how it does it. What we couldn't really get a handle on is why you would need to do it in the first place. The modifier has been there since 2003, and we've both been working with .Net for longer than that and it's never come up. When would this behavior be necessary in a practical sense (e.g.: as applied to a business case)? Is this a feature that has outlived its usefulness or is what it does simply uncommon enough in what we do (specifically we do web forms and MVC applications and some small factor WinForms and WPF)? In trying this keyword out and playing with it we found some behaviors that it allows that seem a little hazardous if misused. This sounds a little open-ended, but we're looking for a specific use case that can be applied to a business application that finds this particular tool useful.

  • How do I clip an image in OpenGL ES on Android?

    - by Maxim Shoustin
    My game involves "wiping off" an image by touch: after moving a finger over it, the image underneath shows through (the original post included before-and-after screenshots here). At the moment, I'm implementing it with Canvas, like this:

        Paint pTouch;
        int X = 100;
        int Y = 100;
        Bitmap overlay;
        Canvas c2;
        Rect dest;

        pTouch = new Paint(Paint.ANTI_ALIAS_FLAG);
        pTouch.setXfermode(new PorterDuffXfermode(Mode.SRC_OUT));
        pTouch.setColor(Color.TRANSPARENT);
        pTouch.setMaskFilter(new BlurMaskFilter(15, Blur.NORMAL));
        overlay = BitmapFactory.decodeResource(getResources(), R.drawable.wraith_spell).copy(Config.ARGB_8888, true);
        c2 = new Canvas(overlay);
        dest = new Rect(0, 0, getWidth(), getHeight());
        Paint paint = new Paint();
        paint.setFilterBitmap(true);
        ...

        @Override
        protected void onDraw(Canvas canvas) {
            ...
            c2.drawCircle(X, Y, 80, pTouch);
            canvas.drawBitmap(overlay, 0, 0, null);
            ...
        }

        @Override
        public boolean onTouchEvent(MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN:
                case MotionEvent.ACTION_MOVE: {
                    X = (int) event.getX();
                    Y = (int) event.getY();
                    invalidate();
                    c2.drawCircle(X, Y, 80, pTouch);
                    break;
                }
            }
            return true;
            ...

    What I'm essentially doing is drawing transparency onto the canvas, over the red ball image. Canvas and Bitmap feel old... Surely there is a way to do something similar with OpenGL ES. What is it called? How do I use it?

    [EDIT] I found that if I draw an image, and above it a new image with alpha 0, it becomes transparent - maybe that's a direction? Something like:

        gl.glColor4f(0.0f, 0.0f, 0.0f, 0.01f);

  • Resilient Linux Mail Server Setup

    - by Coops
    How would people design a resilient mail server setup with Linux? On an application level, the system needs to provide both incoming and outgoing mail service (i.e. SMTP & IMAP), along with filtering and archive storage (the archive part isn't critical yet, so we'll probably look at that later). On top of this, what is required is a resilient system, i.e. one which will handle individual server failures without interrupting service. As such, I would term this a high-availability mail system. This is in contrast to a high-performance mail setup: in our case the volume of mail being handled isn't the important factor; it's simply that it stays online. Having not approached this problem before, the first thing I thought of was a clustered file system (GFS/Gluster/etc.), combined with Heartbeat to fail over a floating IP to another box in the case of a server failure. Combined with Postfix & Dovecot, does this sound feasible to people?

  • Migrating a database from 32bit to 64bit

    - by Mike Dietrich
    Database migrations from a 32bit environment to a 64bit environment keeping the same platform architecture (e.g. moving an Oracle 10.2.0.5 database from MS Windows XP 32bit to MS Windows Server 2003 64bit) do not happen that often anymore. But we still see them getting done, and there are a few things to note when doing such a move. First of all, the important question is: will you upgrade your database as part of this move - yes or no?

    If you say "Yes" then you are almost done with this topic, as we will take care of the bitness move during the upgrade. The only thing you have to take care of is OLAP, in case you are using the OLAP Option with Analytic Workspaces (AWs). Those store data in binary LOBs, and in order to move AWs from 32bit to 64bit you have to export your AWs prior to the move and import them later on. People who don't use OLAP don't have to take care of this.

    But if you say "No" (meaning: no upgrade actions involved - you keep your database version) then you have to make sure to invalidate all packages and stored code in the database before you shut down your database in the 32bit environment, prior to moving it over. And the same rule as above applies for OLAP once you use the OLAP Option.

    In the source environment:

        startup upgrade;   -- [or startup migrate; for Oracle 9i]
        @?/rdbms/admin/utlirp.sql
        shutdown immediate

    In the destination environment:

        startup upgrade
        @?/olap/admin/xumuts.plb   -- Only if OLAP Option is installed
        @?/rdbms/admin/utlrp.sql

    The script utlirp.sql will invalidate all packages and stored code, utlrp.sql will recompile them, and xumuts.plb will rebuild the OLAP Analytic Workspaces in case you have the OLAP Option installed.

  • Is error suppression acceptable in the role of a logic mechanism?

    - by Rarst
    This came up in code review at work, in the context of PHP and the @ operator. However, I want to keep this in a more generic form, since the few questions about it I found on SO got bogged down in technical specifics. Accessing an array field which is not set results in an error message, and is commonly handled by logic like the following (pseudo code):

        if field value is set
            output field value

    The code in question was doing it like:

        start ignoring errors
        output field value
        stop ignoring errors

    The reasoning for the latter was that it makes for more compact and readable code in this specific case. I feel that those benefits do not justify what is (IMO) misuse of language mechanics. Is such code being "clever" in a bad way? Is discarding a possible error (for any reason) acceptable practice over explicitly handling it (even if that leads to more extensive and/or intensive code)? Is it acceptable for programming operators to cross the boundaries of their intended use (like, in this case, using error handling to control output)?

    Edit: I wanted to keep it more generic, but the specific code being discussed was like this:

        if ( isset($array['field']) ) {
            echo '<li>' . $array['field'] . '</li>';
        }

    vs the following example:

        echo '<li>' . @$array['field'] . '</li>';

  • IRC Services with failover support?

    - by insertjokehere
    I run a single-server (call it 'Server A') IRC 'network', and thanks to the generosity of some friends, I have been given a second server ('Server B') on which I can run an ircd in order to provide redundancy in case Server A crashes. This is fine; I can set up round-robin DNS with the servers linked. The problem I have is what to do about services. Does anyone know of a way to get the services to 'fail over' in case of a server failure? E.g., Server A starts off running the services but suddenly crashes; Server B detects this and starts its own copy of the services (ideally with the same configuration and data as the services that were running on Server A). One solution that comes to mind is to write a bot that each server runs, which sits in a channel periodically checking whether the bot from the other server is in the channel. If it is, then all is well; if not, fail over (a rough sketch of this idea is below). I would prefer not to have to code this myself, though. We are currently using Unreal IRCd and Anope services on Linux.
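
    For what it's worth, a minimal sketch of that watchdog idea in Python (the server address, nicks and failover action are invented; the IRC commands are plain RFC 1459 - ISON asks the server whether a nick is online, and 303 is its reply). It leans on the server's periodic PINGs to wake the loop, and real code would need reconnect handling:

        import socket
        import time

        SERVER, PORT = "irc.example.net", 6667   # assumed address of the local ircd
        ME, PEER = "watchdog-b", "watchdog-a"    # invented watchdog nicks

        def start_services():
            # Placeholder: launch the local services package here.
            print("peer watchdog gone - starting local services")

        sock = socket.create_connection((SERVER, PORT))
        f = sock.makefile("rwb", buffering=0)
        f.write(f"NICK {ME}\r\nUSER {ME} 0 * :failover watchdog\r\n".encode())

        last_probe = 0.0
        for raw in f:
            line = raw.decode(errors="replace").strip()
            if line.startswith("PING"):              # keep the connection alive
                f.write(("PONG" + line[4:] + "\r\n").encode())
            elif " 303 " in line:                    # RPL_ISON: list of online nicks
                online = line.split(":", 2)[-1].split()
                if PEER not in online:
                    start_services()
            if time.time() - last_probe > 30:        # re-probe roughly every 30s
                f.write(f"ISON {PEER}\r\n".encode())
                last_probe = time.time()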

  • Tomcat / Railo stop responding with no error output

    - by andrewdixon
    This is going to sound very vague, and I'm sure it will be voted down for not giving enough information; however, I don't really have any to give, as you will see. We have an AWS instance running Amazon Linux, Apache, Tomcat and Railo, and from time to time Tomcat/Railo simply stops responding to requests, with no errors output in catalina.out or any of the other log files in the Tomcat logs directory. When I issue the command to restart Tomcat/Railo, the restart script sits there for a while, then says that Tomcat has not responded so it has killed it off, and then it starts up again and everything is fine - until it happens again, anything from a couple of minutes to a couple of days later. I have done my best to check other logs on the server, but have found no messages at all to indicate why Tomcat/Railo has given up and stopped responding. Can anyone suggest any reason why it might be doing this, and/or any other log file(s) we could check to see what is happening? Thanks. Andrew.

  • Architecture of a "website generator" web application

    - by Resorath
    What is the most maintainable and efficient way to architect a web application whose purpose is to host and generate websites which can be customized to a certain degree? There are a lot of applications of this style in the wild that generate all kinds of sites, from sites that host World of Warcraft guilds, like guildlaunch, to sites like my wedding for wedding site hosting. My question is, what is the basic architecture these sites operate on? I imagine there are two ways of thinking about this. A central set of code that all sites on the host run against, which acts differently based on which site was visited; in this manner, when the base code is updated, all sites are updated simultaneously (a sketch of this option is below). Or, the code for an individual site exists in a silo, and is simply replicated to a new directory each time a site is created; when an update needs to be applied, the code is pushed out to each site silo. In my case, I am working in PHP with the CodeIgniter framework, but the answer need not be limited to this case. Which method (if any) creates a more maintainable and efficient architecture for managing this style of web application?
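
    A tiny illustrative sketch of the first option (central code, behavior keyed off the requesting host) - written in Python/WSGI for brevity, though the same shape applies in PHP/CodeIgniter; the hostnames and settings are invented:

        # One application serves every site; per-site settings are looked up
        # by the Host header, so updating this code updates all sites at once.
        from wsgiref.simple_server import make_server

        SITE_CONFIGS = {
            "alice.example.com": {"theme": "wedding", "title": "Alice & Bob"},
            "guild.example.com": {"theme": "gaming", "title": "Raid Night"},
        }

        def app(environ, start_response):
            host = environ.get("HTTP_HOST", "").split(":")[0]
            site = SITE_CONFIGS.get(host)
            if site is None:
                start_response("404 Not Found", [("Content-Type", "text/plain")])
                return [b"unknown site"]
            body = f"<h1>{site['title']}</h1><!-- theme: {site['theme']} -->".encode()
            start_response("200 OK", [("Content-Type", "text/html")])
            return [body]

        if __name__ == "__main__":
            make_server("", 8000, app).serve_forever()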

  • Acer Aspire 5542G overheating with ubuntu/kubuntu 12.04

    - by james
    I have an Acer Aspire 5542G laptop purchased a couple of years ago. All this time I used Windows 7 on it. Then I tried Ubuntu 12.04. Everything was fine except an overheating issue. I updated Ubuntu with all security fixes and available updates, but nothing solved the problem. With idle use, like internet browsing, the CPU fan speeds up a lot and I can feel very hot air coming from the vent (comparable to playing a serious 3D game in Windows). It will not go to the point of freezing and shutting down, but as long as I'm using it, with no intensive tasks at all, the laptop stays too hot. This wasn't the case with Windows 7: there, the fan will not spin up at all under normal use. I heard there was a manufacturing defect with some Acer laptops, but I don't think that's the case with my laptop, since Windows 7 runs perfectly. I updated the BIOS to the latest version. I cleaned the dust from the vents. I tried Kubuntu 12.04 fully up to date. Nothing solved the issue. My laptop specs are: CPU: AMD Turion II X2 M500 @ 2.2GHz; GPU: AMD Mobility Radeon HD4570; 3GB RAM and a 320GB hard disk.

  • Entity System with C++

    - by Dono
    I'm working on a game engine using the Entity System and I have some questions. How I see the Entity System:

    Components: classes with attributes, setters and getters. Examples: Sprite, PhysicBody, SpaceShip, ...
    Systems: classes with a list of components (the component logic). Examples: EntityManager, Renderer, Input, Camera, ...
    Entity: just an empty class with a list of components.

    What I've done: currently, I've got a program that allows me to do this:

        // Create a new entity.
        Entity* entity = game.createEntity();

        // Add some components.
        entity->addComponent( new TransformableComponent() )
            ->setPosition( 15, 50 )
            ->setRotation( 90 )
            ->addComponent( new PhysicComponent() )
            ->setMass( 70 )
            ->addComponent( new SpriteComponent() )
            ->setTexture( "name.png" )
            ->addToSystem( new RendererSystem() );

    My question: does the system store a list of components or a list of entities? In the case where I store a list of entities, I need to get the components of these entities on each frame; that's probably heavy, isn't it?

  • Regular Expression to replace part of URL in XML file

    - by Richie086
    I need a regular expression in Notepad++ to search/replace a string. My document (XML) has several thousand lines that look similar to this:

        <Url Source="Output/username/project/Content/Volume1VolumeName/TopicFileName.htm" />

    I need everything after Volume1, up to the .htm extension, to be replaced with X's or some other character, to mask the actual file names in this file. So the resulting string would look like this after the search/replace is performed:

        <Url Source="Output/username/project/Content/Volume1XxxxxxXxxx/XxxxxXxxxXxxx.htm" />

    I am working with confidential information that I cannot release to people outside of my company, but I need to send an example log file to a third party for troubleshooting purposes. FYI, the X's do not need to follow the upper/lower case of the original characters; I was just using different-case X's for the hell of it :)
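
    Since a plain find-and-replace can't easily emit one X per masked character, one workable approach is a tiny script instead of (or to prototype) the Notepad++ regex. An illustrative Python sketch, assuming every relevant path contains Content/Volume1 and ends in .htm:

        # Mask everything between "Content/Volume1" and ".htm" with X's,
        # preserving length and the directory separator.
        import re

        line = '<Url Source="Output/username/project/Content/Volume1VolumeName/TopicFileName.htm" />'

        def mask(match):
            return match.group(1) + re.sub(r"[^/]", "X", match.group(2))

        print(re.sub(r'(Content/Volume1)([^"]*?)(?=\.htm")', mask, line))
        # -> <Url Source="Output/username/project/Content/Volume1XXXXXXXXXX/XXXXXXXXXXXXX.htm" />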

  • Hosting and scaling a Facebook application in the cloud? [migrated]

    - by DhruvPathak
    We would be building a Facebook application in Django (Python), but we are still not sure where to host it economically, with good provision to scale in case the app goes viral. Some details about the app: it would be HTML-based, like a website, using Django as the framework; 100K pageviews a day are expected if the app goes viral; the users will not generate any media content, only some database data. It would be great if someone with more experience could give guidance on the following points:

    A) Hosting on Google App Engine, Amazon EC2, or some other cloud like Rackspace: preferable points found in App Engine were ease of deployment, cost effectiveness and easy scaling. For EC2: full control of the virtual machine, plus Amazon NoSQL and RDBMS database services in case we decide to use them.

    B) Does backend technology affect monthly cost? E.g., would the CPU and memory usage difference of Django over, for example, a PHP framework like CodeIgniter really make a remarkable difference in running costs? (Here is the article that triggered this thought process: http://journal.dedasys.com/2010/01/12/rough-estimates-of-the-dollar-cost-of-scaling-web-platforms-part-i#comments)

    C) Does something like Heroku, which provides additional services on top of Amazon EC2, prove to be better than raw cloud management?

    It is not that we are trying for premature scaling; we just want a good start so that we are ready to handle unpredicted growth and scale.

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states that:

        Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug.

        Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition.

    I am currently programming a web app that has some models with properties that can only come from a set of values, e.g. (not the actual example, as the web app's data is confidential):

        light-type = sphere / cube / cylinder

    The light type can only be one of the above three values, but according to TPP I should always code as if they could change, and place their values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in:

    a config file (sketched below): 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, etc.
    a single table in a database, with one line for each config item
    a database with a table for each config item (e.g. table: light_types; columns: id, name)
    some other way?

    Many thanks for any assistance / expertise offered.
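
    A minimal sketch of the config-file option (the file name and keys are illustrative): the allowed values live in a JSON file next to the app, so adding a light type touches no code:

        # Expects a config.json like:
        #   {"light_types": ["sphere", "cube", "cylinder"]}
        import json

        with open("config.json") as fh:
            config = json.load(fh)

        def validate_light_type(value):
            if value not in config["light_types"]:
                raise ValueError(f"unknown light type: {value!r}")
            return value

        print(validate_light_type("cube"))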

  • When is it ever ok to write your own development tools? (editor into IDE)

    - by mario
    So I'm foremost using a text editor for coding - a very bare-bones editor that provides mostly just syntax highlighting. But on rare occasions I also need to debug something, and that's when I have to resort to an IDE (mostly NetBeans, but I got a fiddly Eclipse/Aptana working as a second fallback). For general use, however, IDEs feel unworkable to me. It's a visual thing, being used to console UIs, etc. And switching back and forth between a text editor and an IDE is slightly cumbersome too. That's why I'm considering extending the editor - not really into a full-fledged IDE, but at the very least to integrate a debug feature. Since I'm working on PHP, it seems not that much effort. DBGp allows the debug handler to be externalized from the editor (a rough sketch of the editor side is below), so it's just minor integration work, plus figuring out how to shoehorn a breakpoint feature into the editor (joe, btw). And while I do have the time to do that, I'm wondering if it is really worthwhile. In this case it's not a needed development tool; it's just for convenience, and the cause for doing it is basically just not liking the existing solution. While over time I might extend and adapt this debugger thing, initially it will be as circumstantial as Eclipse - it inevitably starts out as a poor development tool. Furthermore, there is likely not much reuse. (Okay, this is not an important point. Most such software exists without much of a use case. And obviously, similar extensions already exist for emacs and vim, so it cannot be completely pointless.) But what's a general guideline on attempting to concoct custom development tools, particularly if they are not really needed but satisfy personal preferences? (Usability enhancement not certain.)
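
    To gauge the integration effort: the wire side of DBGp is small. The framing below (decimal length, NUL, XML, NUL for engine messages; NUL-terminated text for commands) and the default port follow the DBGp spec; everything else is an illustrative stub with no robustness:

        # Minimal editor-side DBGp listener: accept the connection from the
        # engine (e.g. Xdebug), print its init packet, issue one step command.
        import socket

        def recv_exact(sock, n):
            data = b""
            while len(data) < n:
                chunk = sock.recv(n - len(data))
                if not chunk:
                    break
                data += chunk
            return data

        def read_packet(sock):
            length = b""                              # read digits up to NUL
            while (c := sock.recv(1)) not in (b"\0", b""):
                length += c
            return recv_exact(sock, int(length) + 1).rstrip(b"\0").decode()

        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", 9000))                          # Xdebug's default DBGp port
        srv.listen(1)
        conn, _ = srv.accept()

        print(read_packet(conn))                      # the <init ...> packet
        conn.sendall(b"step_into -i 1\0")             # one DBGp command
        print(read_packet(conn))                      # the engine's response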
