Search Results

Search found 13120 results on 525 pages for 'business layer'.


  • 5 Lessons learnt in localization / multi language support in WPF

    - by MarkPearl
    For the last few months I have been quietly working away at the second version of an application that we initially released a few years ago. It's called MaxCut, and it is a free panel/cut optimizer for the woodwork, glass and metal industry. One of the motivations for writing MaxCut was to get an end-to-end experience in developing an application for general consumption. From the early days of v1 of MaxCut I would get the odd email thanking me for the software and then listing a few suggestions on how to improve it. The two most dominant suggestions we received were support for imperial measurements (the original program only supported the metric system) and multi-language support (we had someone who volunteered to translate the program into Japanese for us). I am not going to dive into the imperial-to-metric support in today's blog post, but I would like to cover a few brief lessons we learned in adding multi-language functionality to the software. I have sectioned them below under different lessons.

    Lesson 1 – Build multi-language support in from the start. The first lesson I learnt was: if you know you are going to do multi-language support, build it in from the very beginning! One of the strong points of WPF/Silverlight is data binding in XAML, so while it wasn't too painful to retrofit multi-language support into the program, it was still time-consuming and tedious to go through mounds and mounds of views, and it would have been a minor job to implement this while each view was being designed.

    Lesson 2 – Accommodate varying word lengths using Grids. The next lesson was a little harder to learn and came further down the development cycle. We developed everything in English, assuming that other languages would have words of similar length for equivalent meanings… don't! A word that is short in your language may be much longer in other languages. Some languages, like Dutch and German, allow concatenation of nouns, which has the potential to create really long words. We picked up a few places where our views had been structured incorrectly, so that if a word was too long it would get clipped or cut off. To get around this we began using the WPF Grid extensively, with column widths that automatically expand when they need to. Generally speaking the Grid replacement got around this hurdle, and if in future you have a choice between a StackPanel and a Grid, think twice before going for the easier option: the Grid is often a bit more work to set up, but it is more flexible.

    Lesson 3 – Separate the separators. Our initial run through moving the words to a resource dictionary led us to make what I consider one potential mistake. If we had a label like "length : ", we put it in the resource dictionary as a single entry. This is fine until you start using a word more than once. For instance, in our scenario we used the word "length" frequently, and with every variation of the word, grammar and separators included, stored as its own resource, we ended up with what I would consider a bloated dictionary. When we removed the separators from the words and put them in as their own resources, we saw a dramatic reduction in dictionary size. So something that looked like "length : ", "length. " and "length?" was reduced to "length", ":", "?" and ".". While this may not seem like a reduction at first glance, consider that the separators ":", "?" and "." are used everywhere, and suddenly you see a real reduction in bloat.

    Lesson 4 – Centralize the language dictionary. This lesson was learnt at the very end of the project, after we already had a release candidate out in the wild. Because our translations would be done remotely and on a volunteer basis, we wanted it to be really simple for someone to translate our program into another language. As a common design practice we had tiered the application so that we had a business logic layer, a UI layer, and so on. The problem was that several of these layers had resource files specific to that layer, which meant we had multiple resource files to send to our translators. To add to our problems, some of the wordings were duplicated in different resource files, which caused additional frustration for our translators, as they felt they were duplicating work. Eventually the workaround was to make a separate project in VS2010 containing just the language translations. We exposed the dictionary as public within this project and referenced it from the other projects in the solution. This solved our problem: we now had a central dictionary and could remove any duplications.

    Lesson 5 – Make a dummy translation file to test that you haven't missed anything. The final lesson learnt about multi-language support in WPF: to check whether you have forgotten to translate anything in the inline code, make a test resource file with dummy data. Ideally you want the value for every key to be identical. In our instance we made one in which every resource key pointed to the value "test". This allowed us to point the language file at the test resource file and very quickly browse through the program to see if we had missed any linking. The alternative is to have two language files and swap between them while running the program to make sure you haven't missed anything, but the downside of the dual-language-file approach is that it is a lot harder to spot a mistake when everything is different – almost like playing Where's Wally / Waldo. It is much easier to spot variance in uniformity: when you put the "test" keyword for everything, anything that didn't say "test" stuck out like a sore thumb.

    So these are my top five lessons learnt on implementing multi-language support in WPF. Feel free to make any suggestions in the comments section if you feel something is more important than one of these or if I got it wrong!
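    As a rough illustration of Lesson 5 (a sketch of my own, not necessarily how MaxCut does it), the C# below copies an existing resource file and replaces every value with "test", so that any string you forgot to wire up to the dictionary stands out when you run against the dummy file. The file names are placeholders; ResXResourceReader and ResXResourceWriter live in the System.Windows.Forms assembly.

    using System.Collections;
    using System.Resources;

    class DummyTranslationBuilder
    {
        static void Main()
        {
            // Copy every key from the real dictionary, but point each one at the value "test".
            using (var reader = new ResXResourceReader("LanguageDictionary.resx"))       // hypothetical source file
            using (var writer = new ResXResourceWriter("LanguageDictionary.test.resx"))  // hypothetical dummy file
            {
                foreach (DictionaryEntry entry in reader)
                {
                    writer.AddResource((string)entry.Key, "test");
                }
                writer.Generate();   // flush the dummy .resx to disk
            }
        }
    }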

    Read the article

  • Games at Work Part 1: Introduction to Gamification and Applications

    - by ultan o'broin
    Games Are Everywhere. How many of you (will admit to) remember playing Pong? OK then, do you play Angry Birds on your phone during work hours? Have you thought about why we keep playing online, video, and mobile games, and what this "gamification" business we're hearing about means for the enterprise applications user experience? In Reality Is Broken: Why Games Make Us Better and How They Can Change the World, Jane McGonigal says that playing computer and online games now provides more rewards for people than their real lives do. Games offer intrinsic rewards and happiness to players as they pursue more satisfying work and the success, social connection, and meaning that go with it. Yep, Gran Turismo, Dungeons & Dragons, Guitar Hero, Mario Kart, Wii Boxing, and the rest are all forms of work, it seems. Games are, in fact, work taken so seriously that governments now move to limit the impact of virtual gaming currencies on the real financial system. Anyone who spends hours harvesting crops on FarmVille realizes it's hard work too. Yet games evoke a positive emotion in players who voluntarily stay engaged with them for hours, day after day. Some 183 million active gamers in the United States play on average 13 hours per week. Weekly, 5 million of those gamers play for longer than a working week (45 hours). So why not harness the work put into games to solve real-world problems? Or, in the case of our applications' users, real-world work problems?

    What's a Game? Jane explains that all games have four defining traits: a goal, rules, a feedback system, and voluntary participation. We need to look at which motivational ideas behind the dynamics of games (what we call gamification) are appropriate for our users. Typically, these motivators are achievement, altruism, competition, reward, self-expression, and status. Common game techniques for leveraging these motivations include badging and avatars, points and awards, leader boards, progress charts, virtual currencies or goods, gifting and giving, and challenges and quests. Some technology commentators argue for a game layer on top of everything, but this layer is already part of our daily lives in many instances. We see gamification working around us already: the badging and kudos offered on My Oracle Support or other Oracle community forums, becoming a Dragon Slayer implementor of Atlassian applications, being made duke of your favorite coffee shop on Yelp, sharing your workout details with Nike+, or donating to Japanese earthquake relief through FarmVille, for example. And what does all this mean for the applications that you use in your work? Read on in part two...

    Read the article

  • Computer Networks UNISA - Chap 10 – In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to: understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation; explain the differences between public and private TCP/IP networks; describe the protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4; and employ multiple TCP/IP utilities for network discovery and troubleshooting.

    Designing TCP/IP-Based Networks. The following sections explain how the network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments.

    Subnetting. Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to enhance security, improve performance, or simplify troubleshooting.

    The challenges of classful addressing in IPv4 (no subnetting). The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B and Class C network addresses). Classful addressing has the following limitations: a restriction on the number of usable IPv4 addresses (a Class C network is limited to 254 host addresses), and difficulty separating traffic from various parts of a network. Because of these limitations, subnetting was introduced.

    IPv4 Subnet Masks. Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address: a 1 bit in the mask means the corresponding bit in the IPv4 address contains network information, and a 0 bit means it contains host information. Each network class is associated with a default subnet mask: Class A = 255.0.0.0, Class B = 255.255.0.0, Class C = 255.255.255.0. An example of calculating the network ID for a particular device with a subnet mask: IP address = 199.34.89.127, subnet mask = 255.255.255.0, resulting network ID = 199.34.89.0.

    IPv4 Subnetting Techniques. Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation.

    Calculating IPv4 Subnets. Read pages 491–494 for an explanation. Important: subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet's network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can be prioritized and segmented.

    CIDR (Classless Interdomain Routing). CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; the subnet boundary can move to the left, thereby generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came a new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits used for the extended network prefix. To take advantage of classless routing, your network's routers must be able to interpret IP addresses that don't adhere to conventional network class parameters. Routers that rely on older routing protocols (e.g. RIP) are not capable of interpreting classless IP addresses.
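    As a hedged aside (not part of the textbook notes), the network ID calculation above is just a bitwise AND of the address with the mask. The short C# sketch below reproduces the 199.34.89.127 / 255.255.255.0 example; nothing beyond those two addresses is assumed.

    using System;
    using System.Net;

    class NetworkIdExample
    {
        static void Main()
        {
            byte[] ip   = IPAddress.Parse("199.34.89.127").GetAddressBytes();
            byte[] mask = IPAddress.Parse("255.255.255.0").GetAddressBytes();

            // AND each octet with the mask: 1 bits keep the network information,
            // 0 bits zero out the host information.
            var network = new byte[4];
            for (int i = 0; i < 4; i++)
                network[i] = (byte)(ip[i] & mask[i]);

            Console.WriteLine(new IPAddress(network));   // prints 199.34.89.0
        }
    }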
    Internet Gateways. Gateways are a combination of software and hardware that enable two different network segments to exchange data; a gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The Internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data's destination. The gateways that make up the Internet backbone are called core gateways.

    Address Translation. An organization's default gateway can also be used to "hide" the organization's internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restriction. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the Internet, they must have legitimate IP addresses to exchange data. When a client's transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client's private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation).

    TCP/IP Mail Services. All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below.

    SMTP (Simple Mail Transfer Protocol). The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. It operates from port 25 on the SMTP server and is a simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue.

    MIME (Multipurpose Internet Mail Extensions). The standard message format specified by SMTP allows for lines that contain no more than 1000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it.
    Most modern email clients and servers support MIME.

    POP (Post Office Protocol). POP is an application layer protocol used to retrieve messages from a mail server. POP3 relies on TCP and operates over port 110. With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user. The disadvantage of POP3 is that it typically does not allow users to keep their messages on the server; because of this, IMAP is sometimes used instead.

    IMAP (Internet Message Access Protocol). IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3. The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server rather than having to continually download them. Users can retrieve all or only a portion of any mail message, review their messages and delete them while the messages remain on the server, create sophisticated methods of organizing messages on the server, and share a mailbox in a central location. The disadvantages of IMAP are typically related to the fact that it requires more storage space on the server.

    Additional TCP/IP Utilities. Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP; the syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own! Ipconfig (Windows) and Ifconfig (Linux), Netstat, Nbtstat, Hostname, Host and Nslookup, Dig (Linux), Whois (Linux), Traceroute (Tracert), Mtr (my traceroute), Route.
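    As a small, hedged illustration of the port numbers mentioned above (again, not part of the textbook notes), the sketch below opens plain TCP connections to ports 25 (SMTP) and 110 (POP3) and prints whatever greeting banner each service sends. "mail.example.com" is a placeholder host you would replace with a real mail server.

    using System;
    using System.IO;
    using System.Net.Sockets;

    class MailBannerCheck
    {
        static void Main()
        {
            foreach (int port in new[] { 25, 110 })
            {
                try
                {
                    using (var client = new TcpClient("mail.example.com", port))   // placeholder host
                    using (var reader = new StreamReader(client.GetStream()))
                    {
                        // SMTP answers with a "220 ..." banner, POP3 with "+OK ..."
                        Console.WriteLine($"Port {port}: {reader.ReadLine()}");
                    }
                }
                catch (SocketException ex)
                {
                    Console.WriteLine($"Port {port}: {ex.Message}");
                }
            }
        }
    }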

    Read the article

  • How are video card/graphics drivers written exactly? [closed]

    - by Bigyellow Bastion
    Since I want an application layer for my operating system, I was wondering what the best way would be to write, or to better understand how to write, my own graphics device drivers or software that lets application-layer software request things from my graphics system (i.e. windows, icons, mouse pointers). I don't need every little detail; I'd just like some insight into how one might accomplish this, and in particular the best way to understand the interaction between the GUI rendering software and the hardware, and between the application software and the graphics driver itself.

    Read the article

  • What am I allowed to do programmatically with pictures that have a Creative Commons "don't modify" license

    - by nist
    I'm working on a project that uses some icons that are under a Creative Commons license (ND) that forbids modification of the picture. What can I do with these icons as a programmer? Can I modify the look of the image in the program as long as I don't change anything in the file that contains the icon? Have I modified the image if I put a colored transparent layer over it so the color of the icon changes?

    Read the article

  • What is a good strategy for binding view objects to model objects in C++?

    - by B.J.
    Imagine I have a rich data model that is represented by a hierarchy of objects. I also have a view hierarchy with views that can extract required data from model objects and display the data (and allow the user to manipulate the data). Actually, there could be multiple view hierarchies that can represent and manipulate the model (e.g. an overview-detail view and a direct manipulation view). My current approach is for the controller layer to store a reference to the underlying model object in the view object. The view object can then get the current data from the model for display, and can send the model object messages to update the data. View objects are effectively observers of the model objects, and the model objects broadcast notifications when properties change. This approach allows all the views to update simultaneously when any view changes the model. Implemented carefully, this all works. However, it does require a lot of work to ensure that no view or model objects hold any stale references to model objects. The user can delete model objects or sub-hierarchies of the model at any time. Ensuring that all the view objects holding references to deleted model objects are cleaned up is time-consuming and difficult. It feels like the approach I have been taking is not especially clean; while I don't want to have explicit code in the controller layer for mediating the communication between the views and the model, it seems like there must be a better (implicit) approach for establishing bindings between the view and the model and between related model objects. In particular, I am looking for an approach (in C++) that understands two key points: there is a many-to-one relationship between view and model objects, and if the underlying model object is destroyed, all the dependent view objects must be cleaned up so that no stale references exist. While shared_ptr and weak_ptr can be used to manage the lifetimes of the underlying model objects and allow for weak references from the view to the model, they don't provide notification of the destruction of the underlying object (only in the sense that attempting to use a stale weak_ptr reveals it), and I need an approach that notifies the dependent objects that their weak reference is going away. Can anyone suggest a good strategy to manage this?

    Read the article

  • Rendering My Fault Message

    - by onefloridacoder
    My coworkers set up a nice way to get the fault messages from our service layer all the way back to the client's service proxy layer, and this is what I needed to work on this afternoon. Through a series of trials and errors I finally figured it out. The confusion was in how I was looking at the exception in the quick watch viewer. It appeared as though the EventArgs from the service call had somehow magically been cast back to FaultException(), not FaultException<T>. I drilled into the EventArgs object with the quick watch and copied this code to figure out where the fault message was hiding. Further, when I copied this quick watch code into the IDE I got squigglies. Poop.

    ((System.ServiceModel.FaultException<UnhandledExceptionFault>)(((System.Exception)(e.Result)).InnerException)).Detail.FaultMessage

    I won't bore you with the details, but here's how it turned out. EventArgs, which I'm calling "e", is not the result (such as the collection of items you're expecting); it's actually a FaultException, or in my case a FaultException<T>. Below is the calling code and the callback that handles the expected response or the fault from the completed event.

    public void BeginRetrieveItems(Action<ObservableCollection<Model.Widget>> FindItemsCompleteCallback, Model.WidgetLocation location)
    {
        var proxy = new MyServiceContractClient();

        proxy.RetrieveWidgetsCompleted += (s, e) => FindItemsCompleteCallback(FindItemsCompleted(e));

        // Assumes the request exposes a property for the location id; the original post wrote
        // "new RetrieveWidgetsRequest { location.Id }", which does not compile as written.
        RetrieveWidgetsRequest request = new RetrieveWidgetsRequest { LocationId = location.Id };

        proxy.RetrieveWidgetsAsync(request);
    }

    private ObservableCollection<Model.Widget> FindItemsCompleted(RetrieveWidgetsCompletedEventArgs e)
    {
        if (e.Error is FaultException<UnhandledExceptionFault>)
        {
            var fault = (FaultException<UnhandledExceptionFault>)e.Error;
            var faultDetailMessage = fault.Detail.FaultMessage;

            UIMessageControlDuJour.Show(faultDetailMessage);
            return new ObservableCollection<Model.Widget>();
        }

        var widgets = new ObservableCollection<Model.Widget>();

        if (e.Result.Widgets != null)
        {
            e.Result.Widgets.ToList().ForEach(w => widgets.Add(this.WidgetMapper.Map(w)));
        }

        return widgets;
    }

    Read the article

  • Custom Folders in SSMS Object Explorer? Yes, we can!

    - by Luca Zavarella
    When you have a huge number of objects in the SSMS Object Explorer, you often get lost trying to find items. So it would be useful to catalog those objects in folders, in order to follow an application's logical layer subdivision, for example. There is a fantastic add-in for SSMS that helps us do exactly that: http://www.sqltreeo.com The developer of this add-in has written a related post on his blog: http://www.sqltreeo.com/wp/dowload-free-ssms-add-in-to-create-own-folder-for-database-objects/ So that's another useful tool to add to our SQL Server toolbox.

    Read the article

  • If most of team can't follow the architecture, what do you do?

    - by Chris
    Hi all, I'm working on a greenfield project with two other developers. We're all contractors; myself and one other just started working on the project, while the original developer has been doing most of the basic framework coding. In the past month, my fellow programmer and I have been frustrated by the design decisions made by our co-worker. Here's a little background information: the application at face value appeared to be your standard n-layered web application using C# on the 3.5 framework. We have a data layer, a business layer and a web interface. But as we got deeper into the project we found some very interesting things that have caused us some trouble. There is a custom data access sqlHelper-type base which only accepts dictionary key/value entries and returns only data tables. There are no entity objects, but there are some massive objects which do everything and are then tossed into session for persistence. The general idea is that the pages (.aspx) don't do anything, while the controls (.ascx) do everything. The general flow is that a client clicks on a button, which goes to a user control base, which passes a process request to the 'BLL' class, which goes to the page processor, which then goes to a getControlProcessor, which at last actually processes the request. The request itself is made up of a dictionary containing a string-valued method name, a stored procedure name, a control name and possibly a value. All switching of the processing is done by comparing the string values of the control names and method names. Pages are linked together via a common header control that uses a combination of javascript and tables to create a hyperlink effect. And as I found out yesterday, a simple hyperlink between one page and another does not work, because of the need to have quite a bit of information in session to determine which control to display on a page. My fellow programmer and I both believe that this is a strange and uncommon approach to web application development. Both of us have been in this business for over five years and neither of us has seen this approach. My question is this: how do we approach our co-worker and voice our concerns, and what should we do if he does not want to accept the criticism? We both do not want to insult the work that has been done, but feel that going forward it will create a nightmare for development. Thanks for your comments.

    Read the article

  • Is the TCP protocol good enough for real-time multiplayer games?

    - by kevin42
    Back in the day, TCP connections over dialup/ISDN/slow broadband resulted in choppy, laggy games, because a single dropped packet meant a resync. That meant a lot of game developers had to implement their own reliability layer on top of UDP, or they used UDP for messages that could be dropped or received out of order and a parallel TCP connection for information that had to be reliable. Given that the average user has a faster network connection now, can a real-time game such as an FPS give good performance over a TCP connection?

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level". One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer over the hardware? One small example: in programming I have never come across the need to understand what the L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example this whole graphics computing thing. Then, on the other hand, there is the theoretical computer science branch, which works on abstract computing models. However, I also rarely encounter situations where I find it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for a big number N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps, which rarely interact. The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • TLS/SSL and .NET Framework 4.0

    The Secure Sockets Layer is now essential for the secure exchange of digital data, and is most generally used within the HTTPS protocol. .NET now provides the Windows Communication Foundation (WCF) to implement secure communications directly. Matteo explains the TLS/SSL protocol, and takes a hands-on approach to investigating the SslStream class to show how to implement a secure communication channel.
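    As a hedged companion to the summary above (this is not code taken from the article itself), here is a minimal sketch of the kind of SslStream usage it investigates: open a TCP connection, let SslStream perform the TLS handshake, then read and write over the encrypted channel. The host name is a placeholder.

    using System;
    using System.IO;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Text;

    class SslStreamSketch
    {
        static void Main()
        {
            const string host = "www.example.com";   // placeholder host

            using (var client = new TcpClient(host, 443))
            using (var ssl = new SslStream(client.GetStream(), false))
            {
                // Performs the TLS handshake and validates the server certificate.
                ssl.AuthenticateAsClient(host);

                var request = Encoding.ASCII.GetBytes(
                    $"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n");
                ssl.Write(request, 0, request.Length);

                using (var reader = new StreamReader(ssl))
                {
                    Console.WriteLine(reader.ReadLine());   // e.g. "HTTP/1.1 200 OK"
                }
            }
        }
    }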

    Read the article

  • Crash when using datablocks

    - by scorcher24
    I have really thoroughly searched the net and could not find any solution for this, so I ask for help here. Anyway, I have this datablock in datablocks.cs:

    datablock t2dSceneObjectDatablock(EnemyShipConfig)
    {
        canSaveDynamicFields = "1";
        Layer = "3";
        size = "64 64";
        CollisionActiveSend = "1";
        CollisionActiveReceive = "1";
        CollisionCallback = true;
        CollisionLayers = "3";
        CollisionDetectionMode = "POLYGON";
        CollisionPolyList = "0.00 -0.791 0.756 0.751 -0.746 0.732";
        UsesPhysics = "0";
        Rotation = "-90";
        WorldLimitMode = "KILL";
        WorldLimitMax = "880 360";
        WorldLimitMin = "-765 -436";
        minFireRate = "2000";
        maxFireRate = "1200";
        laserSpeed = "800";
        minSpeed = "100";
        maxSpeed = "150";
    };

    This is an exact reproduction of an object that I have manually edited in the editor. So far, I have just used clone() to get as many enemies as I need while they are out of sight. It is an R-Type style shooter, so I need a variable number of enemies. Since clone() spams my log, I decided to use datablocks, since that is also more flexible. That's what I get when I use clone(): Con::execute - 0 has no namespace: onRemoveFromScene. However, once spawning begins, my game freezes and crashes:

    function SpawnEnemy()
    {
        //%spawnedEnemy = EnemyShipMaster.clone(true);
        %spawnedEnemy = new t2dStaticSprite()
        {
            class = "EnemyShip";
            sceneGraph = $global_sceneGraph;
            datablock = "EnemyShipConfig";
            imageMap = "starshipImageMap";
            layer = 3;
        };

        %speed = getRandom(%spawnedEnemy.minSpeed, %spawnedEnemy.maxSpeed);
        %y = getRandom(-320, 320);

        // Set Properties
        %spawnedEnemy.setPositionX(700);
        %spawnedEnemy.setPositionY(%y);
        %spawnedEnemy.setVisible(true);
        %spawnedEnemy.setLinearVelocityX( -%speed );
        %spawnedEnemy.setTimerOn( getRandom( %spawnedEnemy.maxFireRate, %spawnedEnemy.minFireRate ) );
    }

    // Definition of $global_sceneGraph from game.cs:
    $global_sceneGraph = sceneWindow2D.loadLevel(%level);

    As I said, it works fine when I use clone() (which is commented out here), but my log gets spammed. I really hope someone can shed some light on this for me; it is driving me crazy.

    Read the article

  • Developer career feeling like going back in time every new job [closed]

    - by komediant
    Is there a good category for this question? My background is a bachelor in ICT, and as a hobby I have been programming since I was around twelve, I think. I started with QBasic, Pascal, C, Java et cetera. I have now been working for about eight or nine years, half in the academic/medical world and half in the company world. A few years ago I started with frameworks and began with Grails (on top of Spring/Hibernate), which was a heavenly job: very productive and no hassle. At my previous job I developed in pure Spring/Hibernate Java, which meant a bit more writing of annotations and XML and no conventions like Grails. But still, I liked Spring/Hibernate a lot, along with the professional setup with a development street, versioning, Jenkins/Sonar, log4j and a good IDE like IntelliJ. It felt quite 'clear' and organised, although I knew Grails felt a bit more productive. But... at my current job almost half the code is pure servlets, hard-coded JDBC (connections handled by yourself), scriptlets in all JSP pages, no service layer, no versioning, no Maven, HTML in the DAO layer, JAR hell, and no hot-swap deployment locally; for every change you have to deploy and hope it works fine on the server. All local development needs ugly scriptlet tags to check which environment it is running in. Et cetera. Now and then the developers work over in the evening - I don't - and still lots of issues are not solved and new projects are waiting. I hear the developers complaining, but somehow they feel like what they have now is "advanced", or they are in a sort of comfort zone. The lead developer seems open to new things, but half of the time he says he can implement MVC-framework features himself instead of using what is already out there. So in short, I currently feel like I am missing all the modern framework techniques and that the company is moving forward very slowly. I have only worked here for two months. What I do now is also code some partially ugly stuff, but it goes completely against my nature and I feel uncomfortable with it. Coding something takes longer than estimated, my manager complains about why it takes so long, and I feel ashamed for needing so much time. Where I was used to just writing a query, I now build up whole try/catch methods. My manager knows my complaints and the developers do too. There will be a meeting to lay out plans for 2013 on technology and the issues I and the company are facing. I am not looking for another job yet; it's close to where I live and the economy is fragile. Has anyone else had this kind of career experience, feeling like you are going backwards with technology? And how did you cope with it?

    Read the article

  • LibGDX - Textures rendering at wrong position

    - by ACluelessGuy
    Update 2: Let me further explain my problem, since I think I didn't make it clear enough: the Y coordinate at the bottom of my screen should be 0. Instead it is the height of my screen. That means the "higher" I touch/click the screen, the smaller my y coordinate gets. On top of that, the origin is not inside my screen, at least not the 0 y coordinate.

    Original post: I'm currently developing a tower defence game for fun using LibGDX. There are places on my map where the player is or is not allowed to put towers. So I created different ArrayLists holding rectangles representing tiles on my map (towerPositions):

    for(int i = 0; i < map.getLayers().getCount(); i++) {
        curLay = (TiledMapTileLayer) map.getLayers().get(i);
        //For all Cells of current Layer
        for(int k = 0; k < curLay.getWidth(); k++) {
            for(int j = 0; j < curLay.getHeight(); j++) {
                curCell = curLay.getCell(k, j);
                //If there is an actual cell
                if(curCell != null) {
                    tileWidth = curLay.getTileWidth();
                    tileHeight = curLay.getTileHeight();
                    xTileKoord = tileWidth*k;
                    yTileKoord = tileHeight*j;
                    switch(curLay.getName()) {
                        //If layer named "TowersAllowed" picked
                        case "TowersAllowed":
                            towerPositions.add(new Rectangle(xTileKoord, yTileKoord, tileWidth, tileHeight));
                            // ... AND SO ON

    If the player clicks on an "allowed" field later on, he has the opportunity to build a tower of his choice via a menu. Now here is the problem: the towers render, but they render at the wrong position. (They appear really randomly on the map, with no pattern I can see.)

    for(Rectangle curRect : towerPositions) {
        if(curRect.contains(xCoord, yCoord)) {
            //Using a certain tower in this example (left the menu out)
            if(gameControl.createTower("towerXY")) {
                //RenderObject is just a class holding the Texture and x/y coordinates
                renderList.add(new RenderObject(new Texture(Gdx.files.internal("TowerXY.png")), curRect.x, curRect.y));
            }
        }
    }

    Later on I render it:

    game.batch.begin();
    for(int i = 0; i < renderList.size() ; i++) {
        game.batch.draw(renderList.get(i).myTexture, renderList.get(i).x, renderList.get(i).y);
    }
    game.batch.end();

    regards

    Read the article

  • How do you edit the "Preferred Format" settings in Rhythmbox?

    - by skyblue
    In Rhythmbox's Preferences, you can change the "Preferred Format" for Music to MPEG Layer 3 Audio, Ogg Vorbis, FLAC, or MPEG 4 Audio. However, despite there being a Settings button, it does not become enabled for any of these choices. (I have installed all of the gstreamer plugins, but this has made no difference.) So how can you change the "Preferred Format", for example to change the bit rate or the quality setting?

    Read the article

  • Are there new flexible texteditors? [closed]

    - by RParadox
    Vi(m) and Emacs are almost 40 years old. Why are they still the standard, and what attempts have been made at coming up with a new flexible editor? Are there any features which cannot be built into vim/emacs? My question is similar to this one: Time to drop Emacs and vi? No one had a suggestion, which surprises me. I would have thought that the core of a text editor is very small and that people would brew their own. Perhaps nobody wants to deal with supporting all the modes? Edit to clarify my question: instead of writing modes and scripts, I ask myself why there isn't a much more lightweight project which lets people customize the editor more directly. Vim has 365395 LOC (C lines, all included), Emacs 1.5 million LOC. Why isn't there a project of, say, 50k LOC forming the core, which people could reuse more freely? Perhaps there is such a project; I haven't looked very far. I thought about putting together modules from Vim myself and experimenting with some ideas. The core of editing shouldn't be more than 10k? Vim has a lot of optimizations which are really overkill nowadays. Basically I'm looking for a code base for my own editor, and Vi/Emacs are, I believe, not intended to be used in this way. Bill Joy said the following about vi (http://web.cecs.pdx.edu/~kirkenda/joy84.html): The fundamental problem with vi is that it doesn't have a mouse and therefore you've got all these commands. In some sense, its backwards from the kind of thing you'd get from a mouse-oriented thing. I think multiple levels of undo would be wonderful, too. But fundamentally, vi is still ed inside. You can't really fool it. Its like one of those pinatas - things that have candy inside but has layer after layer of paper mache on top. It doesn't really have a unified concept. I think if I were going to go back - I wouldn't go back, but start over again.

    Read the article

  • Why Do You Need SSL Certificate

    SSL (Secure Sockets Layer) is an encryption mechanism that ensures the in-transit security of the personal details passed from the browser to the server. We all know that online shopping is prefe... [Author: Jack Melde - Computers and Internet - May 01, 2010]

    Read the article

  • What interface does python use to implement sockets?

    - by user2738698
    When I programmed in Python, I believe I interfaced with the transport layer using sockets. Since Python was written by humans, its authors must have used an interface "lower" than sockets to provide us with the socket interface. I assume firewalls, also written by humans, use interfaces at lower layers in the same manner, so is there a way to access such lower layers programmatically?

    Read the article

  • Are there any limitations to using WinRT instead of .Net?

    - by jerrykobes
    From my understanding, creating an application that runs on multiple architectures requires virtualization, and virtualization reduces performance since it creates a layer of abstraction. With Windows 8 supporting both Intel and ARM architectures, should we expect slower performance with a WinRT app versus a .Net app running on an Intel device? Also, will WinRT support database connectivity and Active Directory access?

    Read the article

  • Single API Architecture

    - by user1901686
    When people refer to an architecture that involves a single service API that all clients talk to (a client can be an iPad app, etc.), what is the "client" for the web app? Is it A) the web browser itself, so the entire app is written in HTML/CSS/JavaScript, Ajax calls to the service fetch data, and changes are made through JavaScript; or B) an MVC-like stack on a server, only instead of the controllers calling the model layer directly, they call the service API, which returns models that are used to render the traditional views; or C) something else?

    Read the article
