Search Results

Search found 20883 results on 836 pages for 'wont say'.


  • More elegant way to avoid hard-coding the format of a CSV file?

    - by dsollen
    I know this is a trivial issue, but I just feel this can be more elegant. I need to write/read data files for my program; let's say they are CSV for now. I can implement the format as I see fit, but I may need to change that format later. The simple thing to do is something like:

        out.write(foo.getValue() + "," + bar.getMinValue() + "," + fi.toString());

    This is easy to write, but is obviously guilty of hard-coding and the general 'magic number' issue. The format is hard-coded, figuring out the file format requires reading the code, and changing the format requires changing multiple methods. I could instead have constants specifying the location where each variable is saved in the CSV file, to remove some of the 'magic numbers', then save/load into an array at the locations specified by the constants:

        int FOO_LOCATION = 0;
        int BAR_MIN_VAL_LOCATION = 1;
        int FI_LOCATION = 2;
        int NUM_ARGUMENTS = 3;

        String[] outputArguments = new String[NUM_ARGUMENTS];
        outputArguments[FOO_LOCATION] = foo.getValue();
        outputArguments[BAR_MIN_VAL_LOCATION] = bar.getMinValue();
        outputArguments[FI_LOCATION] = fi.toString();
        writeAsCSV(outputArguments);

    But this is extremely verbose and still a bit ugly. It makes it easy to see the format of an existing CSV and to swap the locations of variables within the file. However, if I decide to add an extra value to the CSV, I need to not only add a new constant but also modify the read and write methods to add the logic that actually saves/reads the argument from the array; I still have to hunt down every method using these variables and change them by hand! If I use Java enums I can clean this up slightly, but the real issue is still present. Short of some sort of functional programming (and Java's inner classes are too ugly to be considered functional), I have no obvious way of clearly expressing which variable is associated with each constant, short of writing (and maintaining) it in the read/write methods. For instance, I still need to write somewhere that FOO_LOCATION specifies the location of foo.getValue(). It seems as if there should be a prettier, easier-to-maintain manner of approaching this. Incidentally, I'm working in Java at the moment; however, I am interested conceptually in the design approach regardless of language. A library in Java that does all the work for me is definitely welcome (though it may prove more hassle to get permission to add it to the codebase than to just write something by hand quickly), but what I'm really asking is how to write elegant code if you had to do this by hand.
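
    Since the post mentions Java enums, one way to make the association explicit is an enum whose constants each carry their own extractor, so the column order and the per-column logic live in a single place. A minimal sketch, with Row standing in for whatever object holds foo, bar, and fi (all names here are illustrative, not from the original code):

        import java.util.function.Function;
        import java.util.stream.Collectors;
        import java.util.stream.Stream;

        final class Row {
            String fooValue, barMinValue, fiValue; // stand-ins for foo.getValue() etc.
        }

        enum CsvColumn {
            FOO(r -> r.fooValue),
            BAR_MIN_VAL(r -> r.barMinValue),
            FI(r -> r.fiValue);

            private final Function<Row, String> get;

            CsvColumn(Function<Row, String> get) { this.get = get; }

            // One CSV line in declared column order; adding a field to the
            // format means adding one enum constant and nothing else.
            static String toCsvLine(Row r) {
                return Stream.of(values())
                             .map(c -> c.get.apply(r))
                             .collect(Collectors.joining(","));
            }
        }

    Reordering columns is then a matter of reordering the constants, and the read side can loop over values() in the same declared order.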

    Read the article

  • Keep coding the wrong way to remain consistent? [closed]

    - by bwalk2895
    Possible Duplicate: Code maintenance: keeping a bad pattern when extending new code for being consistent, or not?

    To keep things simple, let's say I am responsible for maintaining two applications, AwesomeApp and BadApp (I am responsible for more, and no, those are not their actual names). AwesomeApp is a greenfield project I have been working on with other members of my team. It was coded using all the fancy buzzwords: multilayer, SOA, SOLID, TDD, and so on. It represents the direction we want to go as a team.

    BadApp is an application that has been around for a long time. The architecture suffers from many sins: everything is tightly coupled together, it is not uncommon to get a circular dependency error from the compiler, it is almost impossible to unit test, and it has large classes, duplicate code, and so on. We have a plan to rewrite the application following the standards established by AwesomeApp, but that won't happen for a while.

    I have to go into BadApp and fix a bug, but after spending months coding what I consider correctly, I really don't want to continue perpetuating bad coding practices. However, the way AwesomeApp is coded is vastly different from the way BadApp is coded. I fear implementing the "correct" way would cause confusion for other developers who have to maintain the application.

    Question: Is it better to keep coding the wrong way to remain consistent with the rest of the code in the application (knowing it will be replaced), or is it better to code the right way with the understanding that it could cause confusion because it is so different?

    To give you an example: there is a large class (1000+ lines) with several functions. One of the functions calculates a date based on an enumerated value. Currently the function handles all the various calculations. It relies on no other functionality within the class; it is self-contained. I want to break the function into smaller functions (at the very least), put them into their own classes and hide those classes behind an interface (at the most), and use the factory pattern to instantiate the date classes. If I just broke it out into smaller functions within the class, it would follow the existing coding standard. The extra steps are to start following some of the SOLID principles.
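
    One hedged illustration of that interface-plus-factory idea, in Java for neutrality (the enumerated values and the calculation bodies are invented placeholders, not taken from BadApp):

        import java.time.LocalDate;

        enum DateRule { END_OF_MONTH, NEXT_BUSINESS_DAY }

        interface DateCalculator {
            LocalDate calculate(LocalDate base);
        }

        final class EndOfMonthCalculator implements DateCalculator {
            public LocalDate calculate(LocalDate base) {
                return base.withDayOfMonth(base.lengthOfMonth());
            }
        }

        final class NextBusinessDayCalculator implements DateCalculator {
            public LocalDate calculate(LocalDate base) {
                LocalDate d = base.plusDays(1);
                while (d.getDayOfWeek().getValue() > 5) d = d.plusDays(1); // skip Sat/Sun
                return d;
            }
        }

        final class DateCalculators {
            static DateCalculator forRule(DateRule rule) {
                switch (rule) {
                    case END_OF_MONTH:      return new EndOfMonthCalculator();
                    case NEXT_BUSINESS_DAY: return new NextBusinessDayCalculator();
                    default: throw new IllegalArgumentException("Unknown rule: " + rule);
                }
            }
        }

    Each calculation then becomes independently testable, which is the part that BadApp's current structure makes hard.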

    Read the article

  • How far should an entity take care of its properties' values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, an entity that I'm using through Entity Framework:

        InvoiceHeader
            BilledAmount (property, decimal)
            PaidAmount (property, decimal)
            Balance (property, decimal)

    I'm trying to find the best approach to keeping Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here:

    1. Updating the balance amount every time BilledAmount or PaidAmount is updated (through their setters).
    2. Having an UpdateBalance() method that callers run on the object when appropriate.

    I am aware that I could just calculate Balance in its getter. However, that isn't really possible, because this is an entity field that needs to be saved back to the database, where it has an actual column to which the calculated amount should be persisted.

    My other worry about the automatic-update approach is that the calculated values might differ slightly from what was originally saved to the database, due to rounding (an older version of the software used floats, but now decimals). So loading, say, 2000 entities from the database could change their state and make the ORM believe they have changed, causing them to be persisted back to the database the next time SaveChanges() is called on the context. It would trigger a mass of updates that I am not interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, replaced by freshly recalculated ones, simply by being loaded).

    Then, let's take the example even further. Each invoice has related invoice details, which also have BilledAmount, PaidAmount, and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would loop through each detail of the invoice to recalculate the header totals every time one of the properties changes. That would probably be fine for a single record, but if a lot of entities were fetched at once it could create significant overhead, as the process would run every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances.

    So far I went with the UpdateBalance() method, mainly for the reasons explained above, but I wonder if that was right. I'm noticing I have to call these methods quite often, at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?
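
    For reference, the setter-driven first option might look like the following minimal C# sketch (EF mapping details omitted; only the class and property names come from the post):

        public class InvoiceHeader
        {
            private decimal billedAmount;
            private decimal paidAmount;

            public decimal BilledAmount
            {
                get { return billedAmount; }
                set { billedAmount = value; UpdateBalance(); }
            }

            public decimal PaidAmount
            {
                get { return paidAmount; }
                set { paidAmount = value; UpdateBalance(); }
            }

            // Still a mapped column; note that if the ORM materializes
            // entities through these setters, loading alone will trigger
            // a recalculation, which is exactly the rounding worry above.
            public decimal Balance { get; private set; }

            private void UpdateBalance()
            {
                Balance = BilledAmount - PaidAmount;
            }
        }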

    Read the article

  • Enterprise with eyes on NoSQL

    - by thegreeneman
    Since joining Oracle a few months back, I have had the fortune of interacting with a number of large enterprise organizations and discussing their current state of adoption of NoSQL database technology. It is worth noting that a large percentage of these organizations do have some NoSQL use and have been steadily increasing their understanding of its applicability to certain data management workloads.

    Through those discussions I've learned that one of the biggest issues confronting enterprise adoption of NoSQL databases is the lack of standards for access, administration, and monitoring. This was not so much of an issue with the early adopters of NoSQL technology, because they employed a highly DevOps-centric approach to application deployment, leaving a select few highly qualified developers with the task of managing in production the system that they designed and implemented. However, as NoSQL technology moves out of the startup and into the hands of larger corporate entities, developers with a broad skill set, capable of both development and IT-style production management, are in short supply and quickly get moved on to new projects, often moving to different roles within the company. This difference between the way smaller, more agile startups operate and the way more established companies do is revealing a gap in the NoSQL technology segment that needs to be addressed.

    This is one of the places where a company such as Oracle has a leg up on the NoSQL database front. A combination of having gone through a past database maturation process and a vast set of corporate relationships that have grown hand in hand with solving these types of issues puts Oracle in a great place to lead the way in closing the requirements gap for NoSQL technology. Oracle's understanding of the needs specific to mature organizations has already made its way into Oracle's NoSQL Database offering, with features such as: one-click cluster deployment with visual topology planning, standards-based monitoring protocols such as SNMP, support for data access for reporting via standard SQL, and integration with emerging standards for data access such as MapReduce. Given the exciting developments we're driving in the Oracle NoSQL Database group, I will have a lot more to say about this topic as we move into the second half of the year.

    Read the article

  • wlan0 (WPA2) doesn't work when configured manually

    - by 71GA
    I have been trying to reconfigure my eth0 and wlan0 interfaces by editing the /etc/network/interfaces file as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.11
            gateway 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            dns-nameservers 193.2.1.66

        auto wlan0
        iface wlan0 inet static
            address 192.168.1.10
            gateway 192.168.1.1
            netmask 255.255.255.0
            network 192.168.1.0
            dns-nameservers 193.2.1.66
            wpa-driver wext
            wpa-ssid lausi
            wpa-ap-scan 2
            wpa-proto RSN
            wpa-pairwise CCMP
            wpa-group CCMP
            wpa-key-mgmt WPA-PSK
            wpa-psk 8952a447c860d13847ba1cabd15314ba9caf2fb207f19598f90c43fcd43c0d97

    But my wireless doesn't work when I use the command /etc/init.d/networking restart, and when I do this I get an error:

        * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces
        * Reconfiguring network interfaces...
        RTNETLINK answers: File exists
        Failed to bring up eth0.
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        ioctl[SIOCSIWENCODEEXT]: Invalid argument
        RTNETLINK answers: File exists
        Failed to bring up wlan0.

    Although it clearly states that my eth0 interface couldn't be brought up, it is working! But I can't say the same for the wlan0 interface, which doesn't work even if I unplug the Ethernet cable and again run /etc/init.d/networking restart. This seems weird to me. When I use the ifconfig -a command I get output which confirms that wlan0 isn't working and eth0 is:

        ziga@ziga-cq56:/etc/network$ ifconfig -a
        eth0      Link encap:Ethernet  HWaddr 60:eb:69:6f:5f:69
                  inet addr:192.168.1.11  Bcast:192.168.1.13  Mask:255.255.255.0
                  inet6 addr: fe80::62eb:69ff:fe6f:5f69/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:6764 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:6641 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:5932190 (5.9 MB)  TX bytes:1331846 (1.3 MB)
                  Interrupt:42 Base address:0xc000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1759 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1759 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:107772 (107.7 KB)  TX bytes:107772 (107.7 KB)

        wlan0     Link encap:Ethernet  HWaddr 70:f3:95:e7:57:cc
                  inet addr:192.168.1.10  Bcast:192.168.1.12  Mask:255.255.255.0
                  BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

    How can I make my wlan0 interface work? It had been working previously with Network Manager and Wicd.

    Read the article

  • Remote Graphics Diagnostics with Windows RT 8.1 and Visual Studio 2013

    - by Michael B. McLaughlin
    Originally posted on: http://geekswithblogs.net/mikebmcl/archive/2013/11/12/remote-graphics-diagnostics-with-windows-rt-8.1-and-visual-studio.aspx

    This blog post is a brief follow-up to my What's New in Graphics and Game Development in Visual Studio 2013 post on the MVP Award blog. While writing that post I was testing out various features to make sure everything worked as expected. I had some trouble getting Remote Graphics Diagnostics (a/k/a remote graphics debugging) working on my first-generation Surface RT (upgraded to Windows RT 8.1). It was stranger still because I could use remote debugging when doing CPU debugging; it was just graphics debugging that was causing trouble. After some discussions with the great folks who work on the graphics tools in Visual Studio, they were able to repro the problem and recommend a solution: my Surface RT needed the ARM Kits policy installed on it. Once I followed the instructions at the previous link, I could successfully use Remote Graphics Diagnostics on my Surface RT.

    Please note that this requires Windows RT 8.1 RTM (i.e. not Preview) and that Remote Graphics Diagnostics on ARM only works when you are using Visual Studio 2013, as it is a new feature (it should work just fine using the Express for Windows version). Also, when I installed the ARM Kits policy I needed to do two things to get it to work properly. First, when following the "How to install the Kits policy" instructions, I needed to copy the SecureBoot folder into Program Files on my Surface RT (specifically, I copied the SecureBoot folder to "C:\Program Files\Windows Kits\8.1\bin\arm\" on my Surface RT, creating any necessary directories). It may work from any system folder; I didn't test any others after I got it working. I had initially put it in my Downloads folder and tried installing it from there. When the machine restarted it displayed a worrisome error message. I repeatedly pressed the button that would allow me to retry, and eventually the machine rebooted and managed to recover itself to its previous state. Second, I needed to install it as an Administrator. The instructions say that this might be necessary; for me it was.

    Remote Graphics Diagnostics is a great new feature in Visual Studio 2013, so I definitely encourage all of you to check it out!

    Read the article

  • Is it possible to get a patch included in the current release? If so, how?

    - by Oli
    So a while back I reported a bug in Compiz's Place Window plugin. It's a fairly major regression for the people affected by it, mainly those using Gnome-Fallback, judging by the reports. A patch surfaced a short time later. I created a PPA for testing, and everybody involved so far reports that the issues are fixed. It even fixes another bug. I've tested with a standard Unity desktop and can say (for my testing) no adverse effects were visible.

    I want to get this pushed to Ubuntu right now, for two main reasons:

    - I'm selfish. I don't want to have to update my PPA every time a new version of Compiz is pushed to 12.04.
    - I don't want Ubuntu users seeing their windows flying around because of a silly little bug.

    I want this patch pushed to Ubuntu's version of Compiz as soon as possible, so we can mark these bugs fixed and move on with our lives. Whose leg do I have to hump to get this pulled into Ubuntu right now? I don't maintain this project, and it's an upstream thing, but it's fairly integral to Ubuntu. I could go to Compiz, but I imagine that if they accept the patch it'll be months (at least a release) before it's anywhere near Ubuntu.

    And when I do find the right person, how can I make the process as slick as possible for them? I want them to see my request, go "Yup, that all looks great, done", and that be it. I don't want seventeen rounds of emails addressing aspects of the patch. More importantly, I don't want to waste their time either. And what do I have to provide them? My packaging skills are... lamentable. This was my first attempt at patching a package for redistribution, so I've probably made every single packaging error known to man. Will they be happy with the original patch (so they can apply it themselves), or should I repackage things so the diff/changelog is a little cleaner (it took me a few goes and the versioning is all over the place)?

    Note: This question is about Compiz, but I'd prefer if answers could address other styles of package too, so we have an authoritative and comprehensive thread on how to get things fixed.

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level".

    One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to higher levels. On the other hand, I wonder how much influence the lower levels can have, for example in graphics computing.

    Then again, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are more or less impossible to solve for a big N. What I'm missing is a reference framework to think about these things. It seems to me that there are all kinds of different camps which rarely interact.

    The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, and so on.

    Read the article

  • Criteria for a programming language to be considered "mature"

    - by Giorgio
    I was recently reading an answer to this question, and I was struck by the statement "The language is mature". So I was wondering what we actually mean when we say that a programming language is mature. Normally, a programming language is initially developed out of a need, e.g.:

    - Try out / implement a new programming paradigm, or a new combination of features that cannot be found in existing languages.
    - Try to solve a problem or overcome a limitation of an existing language.
    - Create a language for teaching programming.
    - Create a language that solves a particular class of problems (e.g. concurrency).
    - Create a language and an API for a special application field, e.g. the web (in this case the language might reuse a well-known paradigm, but the whole API must be new).
    - Create a language to push your competitor out of the market (in this case the creator might want the new language to be very similar to an existing one, in order to attract developers to the new programming language and platform).

    Regardless of the original motivation and the scenario in which a language has been created, eventually some languages are considered mature. In my intuition, this means that the language has achieved (at least one of) its goals, e.g. "We can now use language X as a reliable tool for writing web applications." This is, however, a bit vague, so I wanted to ask what you consider the most important criteria (if any) that are applied when saying that a language is mature.

    IMPORTANT NOTE: This question is (on purpose) language-agnostic, because I am only interested in general criteria. Please write only language-agnostic answers and comments! I am not asking whether any specific "language X is mature" or "which programming languages can be considered mature", or whether "language X is more mature than language Y": please avoid posting any opinions or references about any specific languages, because these are out of the scope of this question.

    EDIT: To make the question more precise, by criteria I mean such things as "tool support", "adoption by the industry", "stability", "rich API", "large user community", "successful application record", "standardization", "clean and uniform semantics", and so on.

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program, and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:

        #version 330

        in vec3 WorldPos0;
        in vec2 TexCoord0;
        in vec3 Normal0;
        in vec3 Tangent0;

        layout(location = 0) out vec3 WorldPos;
        layout(location = 1) out vec3 Diffuse;
        layout(location = 2) out vec3 Normal;

        uniform sampler2D gColorMap;
        uniform sampler2D gNormalMap;

        vec3 CalcBumpedNormal()
        {
            vec3 Normal = normalize(Normal0);
            vec3 Tangent = normalize(Tangent0);
            Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
            vec3 Bitangent = cross(Tangent, Normal);
            vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
            BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
            vec3 NewNormal;
            mat3 TBN = mat3(Tangent, Bitangent, Normal);
            NewNormal = TBN * BumpMapNormal;
            NewNormal = normalize(NewNormal);
            return NewNormal;
        }

        void main()
        {
            WorldPos = WorldPos0;
            Diffuse = texture(gColorMap, TexCoord0).xyz;
            Normal = CalcBumpedNormal();
        }

    If my render target textures are configured as:

        RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
        RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
        RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)

    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are multiple render targets?

    From some Googling, I think there are two other ways to output to MRTs:

    1. Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.

    2. Use a typed output array variable of the expected type. In this case, I think it would be something like this:

        out vec3 output[3];

        void main()
        {
            output[0] = WorldPos0;
            output[1] = texture(gColorMap, TexCoord0).xyz;
            output[2] = CalcBumpedNormal();
        }

    So which is the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!
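
    For what it's worth, a minimal sketch of the application-side setup this shader implies (texture handle names are illustrative; error checking omitted): the draw-buffer list is what maps each layout(location = n) output to a color attachment, and the textures only need to be attached to the bound FBO, not bound to texture units, while being rendered into.

        /* C, against an OpenGL 3.3 core context */
        GLuint fbo;
        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo);

        /* Attach the three GL_RGB32F textures created earlier */
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, worldPosTex, 0);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, diffuseTex,  0);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, normalTex,   0);

        /* Fragment output location n is routed to bufs[n] */
        const GLenum bufs[3] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
        glDrawBuffers(3, bufs);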

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All-around awesome tool, but not the only gadget in your toolbox.

    I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour?

    If you have defined a business process around a specific tool, what happens when that tool 'goes away?' Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Select, or nine other mechanisms. Sometimes change brings opportunity for improvement in the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the 'old tool' doesn't always make sense.

    Quick sidebar: scheduling a Windows program to kick off thousands, if not millions, of table inserts from Excel, versus using a 'proper' server process with SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities.

    Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you had in mind starting out, but I had no idea I'd be a Product Manager when I started college either.

    What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • Best practice for organizing/storing character/monster data in an RPG?

    - by eclecto
    Synopsis: I'm attempting to build a cross-platform RPG app in Adobe Flash Builder and am trying to figure out the best class hierarchy and the best way to store the static data used to build each of the individual "hero" and "monster" types. My programming experience, particularly in AS3, is embarrassingly small.

    My ultra-alpha method is to include a "_class" object in the constructor for each instance. The _class, in turn, is a static Object pulled from a class created specifically for that purpose, so things look something like this:

        // Character.as
        package {
            import flash.display.Sprite;

            public class Character extends Sprite {
                public var _strength:int;
                // etc.

                public function Character(_class:Object) {
                    _strength = _class._strength;
                    // etc.
                }
            }
        }

        // MonsterClasses.as
        package {
            public final class MonsterClasses extends Object {
                public static const Monster1:Object = {
                    _strength: 50
                    // etc.
                };
                // etc.
            }
        }

        // Some other class in which characters/monsters are created.
        // Create a new instance of Character:
        var myMonster = new Character(MonsterClasses.Monster1);

    Another option I've toyed with is the idea of making each character class/monster type its own subclass of Character, but I'm not sure whether that would be efficient or even make sense, considering that these classes would only be used to store variables and would add no new methods. On the other hand, it would make creating instances as simple as var myMonster = new Monster1; and potentially cut down on the overhead of having to read a class containing the data for, at a conservative preliminary estimate, over 150 monsters just to fish out the one monster I want (assuming, and I really have no idea, that such a thing might cause any kind of slowdown in execution).

    But long story short, I want a system that's both efficient at compile time and easy to work with during coding. Should I stick with what I've got or try a different method?

    As a subquestion, I'm also assuming here that the best way to store data that will be bundled with the final game and not read externally is simply to declare everything in AS3. It seems to me that if I used, say, XML or JSON I'd have to use the associated AS3 classes and methods to pull in the data, parse it, and convert it to AS3 object(s) anyway, so it would be inefficient. Right?

    Read the article

  • Website directory structure regarding subdomains, www, and "global" content

    - by Pawnguy7
    I am trying to make a homemade HTTP server. It occurs to me, though, that I never fully understood what you might call "relativity" among web pages. I have come across the fact that www. is a subdomain, and I understand its original purpose. It sounds like, in general, you would redirect (is that 301 or 302?) it to a non-subdomain, sort of; as in, redirecting www.example.com to example.com. I am not entirely sure how to make this work when retrieving files for an HTTP server, though. I would assume that example.com would be the root, and www manifests as a folder within it. I am unsure.

    There is also the question of multi-level subdomains, e.g. subdomain2.subdomain1.example.com. It seems to me they are structured "backwards", where you go from the root leftwards in the folder structure. In this situation, subdomain2 is a directory within subdomain1, which is a directory in the root.

    Finally, it occurs to me I might want a sort of global location. For example, maybe all subdomains still use one image as a logo. It makes more sense to me that there is one image, rather than each subdomain having a copy. In the same way, albeit more doubtfully, you might have global CSS (though that is a bit contrary to the idea of a subdomain in the first place), or a JavaScript file that is commonly used (more efficient than each having its own copy, and better for organization purposes). Finally, maybe you have a global 404 page. I think this might be the case where you have user-created subdomains (say bloggername.example.com), where example.com still has a default 404 when either a subdomain does not exist or a page does not exist under a valid blogger. I am confused about what the directory structure for this should be.

    To summarize: should I have global files that live outside any subdomain directory (and how), how should www. be handled (or any other subdomain that resolves to the same content), and what is the pattern for root/subdomain, as well as subdomains within subdomains (order-wise)? Sorry this is multiple questions, but I feel at the root they are all related to the directory structure.
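
    For illustration, one concrete way a homemade server could map the Host header to a document root, sketched in Java (the nested-directory layout and the www-alias rule are just one possible design choice, not a standard):

        import java.nio.file.Path;

        // Maps "example.com" to sitesRoot/example.com and
        // "sub2.sub1.example.com" to sitesRoot/example.com/sub1/sub2,
        // i.e. rightmost labels first, with "www." treated as an alias
        // for the bare domain (a 301 redirect would work here too).
        final class HostResolver {
            private final Path sitesRoot;

            HostResolver(Path sitesRoot) { this.sitesRoot = sitesRoot; }

            Path docRootFor(String host) {
                if (host.startsWith("www.")) host = host.substring(4);
                String[] labels = host.split("\\.");
                if (labels.length < 2) return sitesRoot.resolve(host);
                Path p = sitesRoot.resolve(labels[labels.length - 2] + "." + labels[labels.length - 1]);
                for (int i = labels.length - 3; i >= 0; i--) {
                    p = p.resolve(labels[i]);
                }
                return p;
            }
        }

    Under a layout like this, "global" assets (the shared logo, a default 404 page) can live directly in the domain directory, where subdomain handlers fall back to them.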

    Read the article

  • I'm graduating with a Computer Science degree but I don't feel like I know how to program.

    - by wp123
    I'm graduating with a Computer Science degree, but I see websites like Stack Overflow and search engines like Google and don't know where I'd even begin to write something like that. During one summer I did have the opportunity to work as an iPhone developer, but I felt like I was mostly gluing together libraries that other people had written, with little understanding of the mechanics happening beneath the hood.

    I'm trying to improve my knowledge by studying algorithms, but it is a long and painful process. I find algorithms difficult, and at the rate I am learning, a decade will have passed before I master the material in the book. Given my current situation, I've spent a month looking for work, but my skills (C, Python, Objective-C) are relatively shallow and are not so desirable in the local market, where C#, Java, and web development are much higher in demand. That is not to say that C and Python opportunities do not exist, but they tend to demand 3+ years of experience I do not have. My GPA is OK (3.0), but it's not high enough to apply to large companies like IBM, or to return for graduate studies.

    Basically, I'm graduating with a Computer Science degree but I don't feel like I've learned how to program. I thought that joining a company and programming full-time would give me a chance to develop my skills and learn from those more experienced than myself, but I'm struggling to find work and am starting to get really frustrated. I am going to cast my net wider and look beyond the city I've grown up in, but what have other people in a similar situation tried? I've worked hard, but I don't have the confidence to go out on my own and write my own app (that is, become an indie developer in the iPhone app market). If nothing turns up, I will need to consider upgrading my skills toward more popular technologies, or try something marginally related like IT, but given all the effort I've put in, that feels like copping out.

    EDIT: Thank you for all the advice. I think I was premature because of unrealistic expectations, but the comments have given me a dose of reality. I will persevere and continue to code. I have a project in mind; although it is well beyond my current capabilities, it will challenge me to hone my craft and prove my worth to myself (and potential employers). Had I known there was a careers overflow I would have posted there instead. Thanks again!

    Read the article

  • Forced to be trained [closed]

    - by steeb
    Is it OK to force employees to take training and then make them sign a contract to stay at the job for X years? Here I'm the only developer in the company; before this job I developed in VB6, VB.Net, and other languages. There are other technicians here, but they query the database server to make reports and write small programs; they do not spend all day programming. I was hired to manage some legacy data that had to be migrated to a new system, and I used my experience to accomplish the task. I have also written some utilities and other programs.

    Since we went live with the new system, I've been developing a lot of side programs and add-ons, and I have changed its source code; basically, I've been learning from it since then and have become a sort of jack of all trades whenever some feature needs to be changed or corrected. I have been at this company for a year and a half, but during the last eight months I've been programming in the system full-time. There have been times when even the implementers ask me how I accomplish certain things.

    The issue is that the company has told me and some co-workers (who do not program in the system but know basic programming and databases) to take basic training in programming for the new system's platform, and when we finish it we will sign a contract to stay with the company for an unspecified time. They have also offered an unspecified better salary. I feel very suspicious, because I already know the basics of the system, and I don't understand why I have to take the course when everybody knows about all the development I've done. I met with my boss and told him that I should not take the course, because I feel I can learn more from the daily requirements I'm asked to handle, and that they should save the money. But he responded that I have to take it and sign afterwards.

    I think they want me to stay with them for a long time, and I'd like to, but I would like to stay because I feel comfortable in all aspects of the job (including salary), not because I signed a contract that forces me to take on more tasks and forces me to turn down other, better job offers. What advice can you give me in this kind of situation?

    Read the article

  • Why do some user agents have spam URLs in them (and why are they always Opera/Presto user agents)?

    - by Erx_VB.NExT.Coder
    If you go to, say, the last 100 entries (visits) on the botsvsbrowsers.com website (exact link, feel free to take a look: http://www.botsvsbrowsers.com/recent/listings/index.html ), you'll notice that almost every user agent that has the keywords "Opera" and "Presto" in it will almost certainly have a web link (URL/web address) inside it, and it won't just be a normal web address but an HTML anchor tag/link to that address. Why is this so? I could not find a single discussion about it on the internet, anywhere, and I tried varying my search terms many times. If the user agent contains the words "Opera" and "Presto" it doesn't mean it will have this web link, but there is about an 80% chance that it will. A typical anchor tag/link inside a user agent looks like this:

        Mozilla/4.0 <a href="http://osis-uk.co.uk/disabled-equipment">disability equipment</a> (Windows NT 5.1; U; en) Presto/2.10.229 Version/11.60

    If you check it out at http://www.botsvsbrowsers.com/recent/listings/index.html you will notice that the angle brackets appear there in unescaped form. This isn't just true for botsvsbrowsers but for several other user agent listing sites. I'm really confused and feel like I'm in a room full of 10,000 people and am the only one seeing this ghost :). If I'm doing statistical analysis, should I include or exclude this type of user agent from my listing (i.e., are these just normal users who've set their user agents to try to drive some traffic to their sites as they browse the web), or is there something else going on?

    The fact that it is so consistent in format leads me to believe that the setting or alteration of the user agent is an automated process, but I cannot work out which program or facility is doing this, especially since it is exclusive to Opera (Presto) user agents beyond, I think, an 8- or 9-point-something browser version. I've run some statistical tests, parsing entries from all over the place and writing custom programs, to get a better understanding of this. Keep in mind that I do see normal URLs in user agents infrequently; they are just text, such as +http://www.someSite.com appended to a user agent, especially when a crawler or bot provides its service URL. That is normal and isn't done with an embedded link (A HREF=) etc., so I'm not talking about those.

    Read the article

  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking, "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all. Here are our competing code suggestions (PHP used as an example, but it can apply more universally; the dispatcher class is named Switcher below because switch is a reserved word in PHP):

        class Switcher {
            public function go($arg) {
                switch ($arg) {
                    case "one":
                        echo "one\n";
                        break;
                    case "two":
                        echo "two\n";
                        break;
                    case "three":
                        echo "three\n";
                        break;
                    default:
                        throw new Exception("Unknown call: $arg");
                }
            }
        }

        class Oop {
            public function go_one() { echo "one\n"; }
            public function go_two() { echo "two\n"; }
            public function go_three() { echo "three\n"; }
            public function __call($_, $__) {
                throw new Exception("Unknown call $_ with arguments: " . print_r($__, true));
            }
        }

    Part of his argument was, "It (the switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer __call, but I would like to hear what others have to say.

    Arguments I can come up with in support of the Oop scheme:

    - A bit cleaner in terms of the code you have to write (less of it, easier to read, fewer keywords to consider).
    - Not all actions are delegated to a single method. There is not much difference in execution here, but at least the text is more compartmentalized.
    - In the same vein, another method can be added anywhere in the class instead of in one specific spot.
    - Methods are namespaced, which is nice. It does not apply here, but consider a case where Switcher::go() operated on a member rather than a parameter: you would have to change the member first, then call the method. With Oop you can call the methods independently at any time.

    Arguments I can come up with in support of the switch scheme:

    - For the sake of argument, a cleaner method of dealing with a default (unknown) request.
    - Seems less magical, which might make unfamiliar developers feel more comfortable.

    Anyone have anything to add for either side? I'd like to have a good answer for him.
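
    For completeness, a third shape that often comes up in this debate is an explicit dispatch table; a minimal PHP sketch in the same spirit as the examples above (the handler bodies are the same placeholders):

        class Dispatch {
            private $handlers;

            public function __construct() {
                // The map from argument to behavior is plain data, so it
                // can be inspected, extended, or built at runtime.
                $this->handlers = array(
                    'one'   => function () { echo "one\n"; },
                    'two'   => function () { echo "two\n"; },
                    'three' => function () { echo "three\n"; },
                );
            }

            public function go($arg) {
                if (!isset($this->handlers[$arg])) {
                    throw new Exception("Unknown call: $arg");
                }
                $handler = $this->handlers[$arg];
                $handler();
            }
        }

    It keeps the single entry point of the switch version while gaining the add-a-case-anywhere property of the Oop version.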

    Read the article

  • Sampling SQL Server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of the event data that had been collected, and that it would be great to reduce the collection overhead on the server if I could still get all activity from some batches.

    In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here). That isn't exactly what I want, though, as there would certainly be events from a particular batch that wouldn't pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator, we can limit collection and still get all the events in batches for investigation:

        CREATE EVENT SESSION session_10_percent ON SERVER
        ADD EVENT sqlserver.sql_statement_starting(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info (
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlos.wait_info_external (
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
        ADD EVENT sqlserver.sql_statement_completed(
            WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
        ADD TARGET ring_buffer
        WITH (MAX_DISPATCH_LATENCY=30 SECONDS,TRACK_CAUSALITY=ON)
        GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.

    Read the article

  • What OpenGL version(s) to learn and/or use?

    - by zuko
    So, I'm new to OpenGL... I have general knowledge of game programming but little practical experience. I've been looking into various articles and books and trying to dive into OpenGL, but I've found the various versions and the old vs. new way of doing things confusing.

    I guess my first question is: does anyone know some figures about the percentage of gamers that can run each version of OpenGL? What's the market share like for 2.x, 3.x, 4.x? I looked into the requirements for Half-Life 2, since I know Valve updated it with OpenGL to run on Mac and I know they usually try to hit a very wide user base, and they list a minimum of the GeForce 8 series. I looked at the 8800 GT on Nvidia's website and it listed support for OpenGL 2.1, which, maybe I'm wrong, sounds ancient to me since there's already 4.x. Then I looked up a driver for the 8800 GT and it says it supports 4.2! A bit of a discrepancy there, lol. I've also read things like XP only supports up to a certain version, or OS X only supports 3.2, and all kinds of other claims. Overall, I'm just confused as to how much support there is for the various versions and which version to learn/use.

    I'm also looking for learning resources. My search results thus far have pointed me to the OpenGL SuperBible. The 4th edition has great reviews on Amazon, but it teaches 2.1. The 5th edition teaches 3.3, and there are a couple of things in the reviews mentioning that the 4th edition is better and that the 5th edition doesn't properly teach the new features. Basically, even within the learning material I'm seeing discrepancies, and I just don't know where to start. From what I understand, 3.x started a whole new way of doing things, and I've read in various articles and reviews that you want to stay away from deprecated features like glBegin() and glEnd(), yet a lot of books and tutorials I've seen use that method. I've seen people saying that, basically, the new way of doing things is more complicated, yet the old way is bad.

    Just a side note: personally, I know I still have a lot to learn beforehand, but I'm interested in tessellation, so I guess that factors into it as well because, as far as I understand, that's only in 4.x? (By the way, my desktop supports 4.2.)

    Read the article

  • How best to look up objects by label?

    - by dsollen
    I am writing a server backed by a pre-written API. I'm going to get a number of strings representing ports, signals, paths, etc. I need to look up the object associated with a given label; these objects are all in memory (no SQL magic to do this for me). My question is: how best do I associate a given unique label with the mutable object it represents?

    I have few enough objects that looking through every signal or every port to find the one that matches is possible, but it may be slightly too slow. To be honest, the direct 'look at every object' method is probably good enough for so small a body of objects, and anything else is premature optimization, but I'm still curious what the proper solution would be if my set of signals grew a bit larger.

    As I see it there are two options available. The first would be to create a 'store' that is a simple map between label and object. I could have it so that every time I call addObject the object is automatically saved into a hashmap or the like. This works, but relies on my properly adding and deleting each object so the map doesn't grow indefinitely. The biggest issue to me is that this involves having some hidden static map in my ModelObject class that just feels... wrong somehow.

    The other option is to have some method that can interpret the labels. All of these labels are derived from the underlying objects, so I can look at a signal label, for instance, and say "these 20 characters are the port" to figure out what port I need. This would allow me to quickly figure out what I need. However, if the labeling scheme is changed, the translateLabelToObject method needs to be updated as well, or everything breaks.

    Which solution is cleaner, or is there a cleaner solution than either of the above? For the record, I'm working with a sufficient number of variables to make direct comparison a little slow, but not enough to be concerned about memory overhead. This is written in Java, and all objects that have labels I need to look up extend the same parent class.
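
    As a point of comparison, the first option can be made less hidden by giving the registry its own small class instead of a static map inside ModelObject; a minimal Java sketch (class and method names are illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        // A registry keyed by label; lookups are O(1), and add/remove
        // live in one place instead of being scattered through ModelObject.
        final class LabelRegistry<T> {
            private final Map<String, T> byLabel = new ConcurrentHashMap<>();

            void add(String label, T obj) { byLabel.put(label, obj); }
            void remove(String label)     { byLabel.remove(label); }
            T lookup(String label)        { return byLabel.get(label); }
            int size()                    { return byLabel.size(); }
        }

    The trade-off described in the post is unchanged: every add and delete must go through the registry, or it drifts out of sync with the objects it indexes.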

    Read the article

  • Rebuilding a Mac Mini (early 2009)

    - by Kelly Jones
    This weekend I decided to rebuild the family's Mac Mini. It's the early 2009 model and I hadn't done it since we got it in March of 2009. Even worse, I had done the import data step (or whatever Apple calls it) which brought over all of the data files and apps from our previous Mac. AND that install goes back to before 2005, as far as I can remember. SO, to say that "cruft" had built up in the operating system is probably a bit of an understatement.

    The rebuild went pretty smoothly, especially since I had a couple of spare hard drives. I hooked up a spare USB drive and formatted it for use with the Mac. I then used Carbon Copy to clone the internal hard drive onto the USB drive. (Carbon Copy is a great little app that I used several years ago and I was happy to see it was not only still around, but updated as well.)

    Once I had my backup, I shut down the Mac and replaced the internal hard drive. I had purchased the hard drive last fall to use with my work laptop, but I got a new work laptop (with awesome dual SSDs) so I wasn't using it anymore. The replacement drive (Seagate Momentus 7200.4 ST9500420AS 500GB 7200 RPM 2.5" SATA 3.0Gb/s Internal Notebook Hard Drive) has more than double the original's capacity and is also faster. I'll have to keep an eye on the temperature, since that 7200 RPM drive will run hotter.

    Opening the Mac Mini is not for the easily intimidated! That cool little case is quite the pain to open. Luckily, OWC put a video together here. After replacing the drive, I then installed a clean copy of OS 10.5 using the DVDs that came with the Mac. After the OS, it was time to reinstall the apps. I downloaded some of the freeware, just to make sure I had the latest versions. For the rest, I just copied from the backup cloned drive to the new drive. (I love the way most Mac apps are written, with almost everything contained within a "package" that I can just copy from one drive to another. MUCH better than the Windows way of using shared DLLs and the registry to store critical pieces that the app needs in order to run!)

    The whole process took longer than I would have preferred, but it was long overdue. It definitely "feels" faster, especially boot time and application launches.

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be:

        name; short description; reference

        Travelling Salesman; find the shortest possible route over a set of targets; http://en.wikipedia.org/wiki/Travelling_salesman_problem

        Resource Disposition (aka Regulation); distribute a limited/exceeding input over a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html

    If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list in my own research).

    Details on scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be why I did not find anything via Google). There is a database-centric definition of what I am looking for in the 'Processes' section of the second example link. That somewhat fits, but the database focus drifts away from the pattern thinking I have in mind. So here are my own thoughts on what's in and what's out:

    - I am NOT looking for a list of foundational algorithms implemented as the basis of a programming language. E.g. the PHP reference describes substr and strlen; these implement algorithms, but they are not what I am looking for.
    - The problem the algorithm addresses would exist even if there were no computers (or other IT components).
    - The main focus of the algorithm is NOT to help other algorithms.
    - Chances are high that there are implementations of the solution, or workarounds without any IT support, out there in the world.
    - However, the algorithm could be beneficially implemented or fully supported by a software application; that is, the problem has to be addressed anyway, but running an implementation in software automates the process (which is why I posted it here and not somewhere else).
    - Typically such implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (fixed to produce no more than one output value).
    - In a normalized data model, the outputs of such an implementation often span multiple rows (sometimes multiple tables), where the number of output rows depends on the input parameters and the rows in the table(s) at start time; this implies that any implementation/procedure must interact with a database (read and/or write).

    I am mainly looking for patterns, not for specific implementations. Example: the Travelling Salesman assumes arbitrary coordinates; it does not say "you need a table targets with fields x and y". However, sometimes descriptions are very focused on examples with specific implementations. No worries, as long as the pattern gets clear.

    Read the article

  • while(true) and loop-breaking - anti-pattern?

    - by KeithS
    Consider the following code:

        public void doSomething(int input)
        {
            while(true)
            {
                TransformInSomeWay(input);
                if(ProcessingComplete(input))
                    break;
                DoSomethingElseTo(input);
            }
        }

    Assume that this process involves a finite but input-dependent number of steps; the loop is designed to terminate on its own as a result of the algorithm, and is not designed to run indefinitely (until cancelled by an outside event). Because the test to see if the loop should end is in the middle of a logical set of steps, the while loop itself currently doesn't check anything meaningful; the check is instead performed at the "proper" place within the conceptual algorithm.

    I was told that this is bad code, because it is more bug-prone due to the ending condition not being checked by the loop structure. It's more difficult to figure out how you'd exit the loop, and could invite bugs as the breaking condition might be bypassed or omitted accidentally given future changes.

    Now, the code could be structured as follows:

        public void doSomething(int input)
        {
            TransformInSomeWay(input);
            while(!ProcessingComplete(input))
            {
                DoSomethingElseTo(input);
                TransformInSomeWay(input);
            }
        }

    However, this duplicates a call to a method in code, violating DRY; if TransformInSomeWay were later replaced with some other method, both calls would have to be found and changed (and the fact that there are two may be less obvious in a more complex piece of code). You could also write it like:

        public void doSomething(int input)
        {
            var complete = false;
            while(!complete)
            {
                TransformInSomeWay(input);
                complete = ProcessingComplete(input);
                if(!complete)
                {
                    DoSomethingElseTo(input);
                }
            }
        }

    ... but you now have a variable whose only purpose is to shift the condition-checking to the loop structure, and it also has to be checked multiple times to provide the same behavior as the original logic.

    For my part, I say that given the algorithm this code implements in the real world, the original code is the most readable. If you were going through it yourself, this is the way you'd think about it, and so it would be intuitive to people familiar with the algorithm. So, which is "better"? Is it better to give the responsibility of condition-checking to the while loop by structuring the logic around the loop? Or is it better to structure the logic in a "natural" way as indicated by requirements or a conceptual description of the algorithm, even though that may mean bypassing the loop's built-in capabilities?

    Read the article

  • Best Practice Method for Including Images in a DataGrid using MVVM

    - by Killercam
    All, I have a WPF DataGrid. This DataGrid shows files ready for compilation and should also show the progress of my compiler as it compiles the files. The format of the DataGrid is:

        Image | File Path | State
        ------|-----------|-----------
          *   | C:\AA\BB  | Compiled
          &   | F:\PP\QQ  | Failed
          >   | G:\HH\LL  | Processing

    The problem is the image column (the *, &, and > are for representation only). I have a ResourceDictionary that contains hundreds of vector images as Canvas objects:

        <ResourceDictionary xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation">
            <Canvas x:Key="appbar_acorn" Width="48" Height="48" Clip="F1 M 0,0L 48,0L 48,48L 0,48L 0,0">
                <Path Width="22.3248" Height="25.8518" Canvas.Left="13.6757" Canvas.Top="11.4012"
                      Stretch="Fill" Fill="{DynamicResource BlackBrush}"
                      Data="F1 M 16.6309,18.6563C 17.1309,8.15625 29.8809,14.1563 29.8809,14.1563C 30.8809,11.1563 34.1308,11.4063 34.1308,11.4063C 33.5,12 34.6309,13.1563 34.6309,13.1563C 32.1309,13.1562 31.1309,14.9062 31.1309,14.9062C 41.1309,23.9062 32.6309,27.9063 32.6309,27.9062C 24.6309,24.9063 21.1309,22.1562 16.6309,18.6563 Z M 16.6309,19.9063C 21.6309,24.1563 25.1309,26.1562 31.6309,28.6562C 31.6309,28.6562 26.3809,39.1562 18.3809,36.1563C 18.3809,36.1563 18,38 16.3809,36.9063C 15,36 16.3809,34.9063 16.3809,34.9063C 16.3809,34.9063 10.1309,30.9062 16.6309,19.9063 Z "/>
            </Canvas>
        </ResourceDictionary>

    Now, I want to be able to include these in my image column and change them at run time. I was going to attempt this by setting up a property of type Image in my view model and binding it to my view via:

        <DataGrid.Columns>
            <DataGridTemplateColumn Header="" Width="SizeToCells" IsReadOnly="True">
                <DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <Image Source="{Binding Canvas}"/>
                    </DataTemplate>
                </DataGridTemplateColumn.CellTemplate>
            </DataGridTemplateColumn>
        </DataGrid.Columns>

    where the view model has the corresponding property. Now, I was told this is not 'pure' MVVM. I don't fully accept this, but I want to know if there is a better way of doing it. Say, binding to an enum and using a converter to get the image? Any advice would be appreciated.
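
    The enum-plus-converter route mentioned at the end might look like this minimal C# sketch (the enum values and the resource-key naming scheme are invented for illustration; only the appbar_ prefix comes from the dictionary above):

        using System;
        using System.Globalization;
        using System.Windows;
        using System.Windows.Data;

        public enum CompileState { Compiled, Failed, Processing }

        // Maps a CompileState to one of the Canvas resources in the merged
        // dictionaries, so the view model never exposes a UI type like Image.
        public sealed class StateToIconConverter : IValueConverter
        {
            public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
            {
                if (!(value is CompileState state)) return null;
                var key = "appbar_" + state.ToString().ToLowerInvariant(); // e.g. "appbar_compiled"
                return Application.Current.TryFindResource(key);
            }

            public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
            {
                throw new NotSupportedException();
            }
        }

    One caveat worth noting: since the resources are Canvas elements rather than ImageSource objects, the cell template would host them in a ContentControl (bound with this converter) instead of an Image.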

    Read the article

  • C programming arrays: I don't understand how I would go about making this program; can anyone guide me through the basic outline, please? :) [on hold]

    - by Rashmi Kohli
    Problem

    The temperature of a car engine has been measured, from real-world experiments, as shown in the table below:

        Time (min)    Temperature (°C)
        0             20
        1             36
        2             61
        3             68
        4             77
        5             110

    Use linear regression to find the engine's temperature at 1.5 minutes, at 4.3 minutes, and at any other time specified by the user.

    Background

    In engineering, we often measure several data points in an experiment but then need to predict a value that we have not measured, lying between two measured values, as in the problem statement above. If the relation between the measured parameters seems to be roughly linear, then we can use linear regression to find the relationship between those parameters. In the data above, the relation seems roughly linear, so we can apply linear regression to this problem. Assuming y = {y0, y1, ... yn-1} has a linear relation with x = {x0, x1, ... xn-1}, we can say that y = mx + b, where m and b are the standard least-squares estimates:

        m = (n*sum(x*y) - sum(x)*sum(y)) / (n*sum(x^2) - (sum(x))^2)
        b = (sum(y) - m*sum(x)) / n

    There is usually a difference between the measured values and the estimated (predicted) values; what linear regression does is minimize those differences while still giving us a straight line. Other methods, such as non-linear regression, are also possible and can achieve higher accuracy and better curve fitting.

    Requirements

    Your program should first print the table of temperatures, similar to the way it is printed in the problem statement. It should then calculate the temperature at minutes 1.5 and 4.3 and show the answers to the user. Next, it should prompt the user to enter a time in minutes (or -1 to quit), and after reading the user's specified time it should give the value of the engine's temperature at that time. It should then go back to the prompt.

    Hints

    - Use a one-dimensional array to store the temperature values given in the problem statement.
    - Use functions to separate tasks such as calculating m, calculating b, calculating the temperature at a given time, printing the prompt, etc. You can then give your algorithm as well as your pseudocode per function, as opposed to one large algorithm diagram or one large sequence of pseudocode.
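
    Since the assignment is in C, a minimal sketch of just the least-squares fit on the measured points may help orient the outline (the required table printing and the enter-a-time/-1-to-quit loop are left as the remaining steps):

        #include <stdio.h>

        /* Least-squares fit: computes m and b from n (x, y) pairs. */
        static void fit(const double *x, const double *y, int n, double *m, double *b) {
            double sx = 0, sy = 0, sxy = 0, sxx = 0;
            for (int i = 0; i < n; i++) {
                sx += x[i]; sy += y[i];
                sxy += x[i] * y[i]; sxx += x[i] * x[i];
            }
            *m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
            *b = (sy - *m * sx) / n;
        }

        int main(void) {
            double t[]    = {0, 1, 2, 3, 4, 5};       /* times from the table */
            double temp[] = {20, 36, 61, 68, 77, 110}; /* temperatures from the table */
            double m, b;
            fit(t, temp, 6, &m, &b);
            /* Quick check of the two required predictions */
            printf("T(1.5) = %.1f, T(4.3) = %.1f\n", m * 1.5 + b, m * 4.3 + b);
            return 0;
        }

    The printf here is only a sanity check; the assignment's interactive prompt would wrap a loop around the same m * time + b evaluation.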

    Read the article
