Search Results

Search found 9124 results on 365 pages for 'big sal'.

Page 185/365

  • Library Organization in .NET

    - by Greg Ros
    I've written a .NET bitwise operations library as part of my projects (ranging from getting the most significant set bit to some more complicated bitwise transformations) and I mean to release it as free software. I'm a bit confused about a design aspect of the library, though. Many of the methods/transformations in the library come in different endianness variants. A simple example is a getBitAt method that regards index 0 as either the least significant bit or the most significant bit, depending on the version used. In practice, I've found that using separate functions for different endianness results in much more comprehensible and reusable code than assuming all operations are little-endian or something. I'm really stumped about how best to package the library. Should methods that have LE and BE versions take an enum parameter in their signature, e.g. Endianness.Little or Endianness.Big? Should I have different static classes with identically named methods, such as MSB.GetBit and LSB.GetBit? On a much wider note, is there a standard I could use in cases like this? Some guide? Is my design issue trivial? I have a perfectionist bent, and I sometimes get stuck on tricky design issues like this... Note: I've sort of realized I'm using endianness somewhat colloquially to refer to the order/place value of digital component parts (be they bits, bytes, or words) in a larger whole, in any setting. I'm not talking about machine-level endianness or serial transmission endianness, just about place-value semantics in general. So there isn't a context of targeting different machines/transmission techniques or anything.
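
    For what it's worth, the two packaging options can be sketched side by side. This is only an illustration - it is written in Java rather than C#, and every name (BitOrder, Bits, Lsb, Msb) is made up - but the trade-off reads the same in .NET: an enum parameter keeps one entry point and pushes the choice to every call site, while twin classes make the choice once and keep call sites short.

        // Option 1: one method, explicit bit-order parameter at every call site.
        enum BitOrder { LSB_FIRST, MSB_FIRST }

        final class Bits {
            static boolean getBit(int value, int index, BitOrder order) {
                int shift = (order == BitOrder.LSB_FIRST) ? index : (31 - index);
                return ((value >>> shift) & 1) != 0;
            }
        }

        // Option 2: two classes with identically named methods; the order is chosen
        // once by picking the class, so call sites stay short: Lsb.getBit(x, 3).
        final class Lsb {
            static boolean getBit(int value, int index) {
                return ((value >>> index) & 1) != 0;
            }
        }

        final class Msb {
            static boolean getBit(int value, int index) {
                return ((value >>> (31 - index)) & 1) != 0;
            }
        }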

    Read the article

  • Bounding volume hierarchy - linked nodes (linear model)

    - by teodron
    The scenario: a chain of points (P_i), i = 0..N, where each P_i is linked to its direct neighbours (P_i-1 and P_i+1). The goal: perform efficient collision detection between any two non-adjacent links, (P_i, P_i+1) vs. (P_j, P_j+1). The question: all works treating this subject of collision detection highly recommend using a broad phase and implementing it via a bounding volume hierarchy. For a chain made out of P_i nodes, it can look like this: I imagine the big blue sphere containing all links, the green ones half of them, the reds a quarter, and so on (the picture is not accurate, but it's there to help understand the question). What I do not understand is: how can such a hierarchy speed up computations between segment collision pairs if one has to update it each frame for a deformable linear object such as a chain/wire/etc.? More clearly, what is the actual principle of a collision detection broad phase in this particular case, and how can it work when the actual computation of the bounding spheres is in itself a time-consuming task and has to be done (since the geometry changes) in each frame update? I think I am missing a key point - if we look at the picture where the chain is in a spiral pose, we see that most spheres are already contained within half of the others or intersect them... it's odd if this is the way it should work.
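
    What usually makes this viable for deformable chains is that the hierarchy's topology is built once (which leaf wraps which link) and only refitted bottom-up each frame: recomputing N spheres is linear and cheap, while the payoff is that a single failed sphere-sphere test still prunes whole blocks of the O(N^2) segment pairs. A rough, hedged sketch of such a refit pass (hypothetical names, binary tree of bounding spheres, deliberately conservative parent spheres):

        // One node of a bounding-sphere tree over the chain; leaves wrap single links.
        class SphereNode {
            SphereNode left, right;      // null for leaves
            int linkIndex = -1;          // leaf: segment (P_i, P_i+1) that it wraps
            float[][] points;            // shared chain points, updated by the simulation
            float cx, cy, cz, radius;

            // Called once per frame after the points have moved; the topology never changes.
            void refit() {
                if (left == null) {      // leaf: smallest sphere around one segment
                    float[] a = points[linkIndex], b = points[linkIndex + 1];
                    cx = (a[0] + b[0]) / 2; cy = (a[1] + b[1]) / 2; cz = (a[2] + b[2]) / 2;
                    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
                    radius = (float) Math.sqrt(dx * dx + dy * dy + dz * dz) / 2;
                } else {                 // internal: conservative sphere around both children
                    left.refit();
                    right.refit();
                    float dx = right.cx - left.cx, dy = right.cy - left.cy, dz = right.cz - left.cz;
                    float d = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
                    cx = left.cx + dx / 2; cy = left.cy + dy / 2; cz = left.cz + dz / 2;
                    radius = d / 2 + Math.max(left.radius, right.radius);  // slightly loose, always valid
                }
            }
        }

    The overlap seen in the spiral pose is expected: the broad phase only has to be conservative, and it earns its keep whenever two subtree spheres do not overlap, because none of the segment pairs beneath them then need exact tests.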

    Read the article

  • Most effective work habit for coding? [on hold]

    - by Cris
    Working on a big solo project (~15,000 LOC), I am encountering the following phenomenon: I seem to work best when I program in short bursts of 10-15 minutes. Right now I am working on a section which is a complete first for me architecturally, and when architectural issues emerge during the implementation, I seem to address them best by taking a total break, then sketching out the ideas on paper, and only going back to the code when I feel I have sufficient clarity. This iterates until the architectural issue for that section is resolved. It seems quite counter-intuitive that I can progress more quickly by coding less and taking more breaks. I am nearing the end of the sections which are firsts for me, about to dive into stuff I am much more familiar with, and am wondering whether this counter-intuitive efficiency will continue. So my question is: even for regular coding of sections one is familiar with, which don't require constant re-clarification of the best architecture, is more progress to be attained by taking more breaks and coding in bursts?

    Read the article

  • Engineered Systems: Oracle kills three birds with one stone

    - by A&C Redaktion
    The news from Oracle's partner business is making headlines in the magazine ChannelPartner. On the new focus on Engineered Systems and the SMB appliances, the article says Oracle can thereby "kill three birds with one stone": first, former Sun hardware resellers get an easier entry into the software business; second, the appliances open up new opportunities for the mid-market; and third, the strategy reinforces the two-tier channel model. Silvia Kaske, Senior Director Channel Sales & Alliances Oracle Deutschland, comments: "We are strengthening the channel worldwide because the SMB business is increasingly picking up." Besides this thoroughly positive assessment of the channel strategy, the article offers a clear overview of what Engineered Systems actually are. It also presents and discusses the possible use cases (Big Data, Mobile Computing, Cloud, etc.) and Oracle's offerings in this area. The highlight - no surprise there - is the Oracle Database Appliance. As the portfolio grows, so naturally does the number of specializations. Which is only logical, says Silvia Kaske: "End customers don't expect generalists, they expect specialists. Only with a clear focus will a partner be successful." The complete ChannelPartner article is available under the title "Oracle lockt Channel mit SMB-Appliances" ("Oracle lures the channel with SMB appliances").

    Read the article

  • What scenarios are implementations of Object Management Group (OMG) Data Distribution Service best suited for?

    - by mindcrime
    I've always been a big fan of asynchronous messaging and pub/sub implementations, but coming from a Java background, I'm most familiar with using JMS-based messaging systems, such as JBoss MQ, HornetQ, ActiveMQ, OpenMQ, etc. I've also loosely followed the discussion of AMQP. But I recently became aware of the Data Distribution Service specification from the Object Management Group, and found there are a couple of open-source implementations: OpenSplice and OpenDDS. It sounds like this stuff is focused on the kind of high-volume scenarios one tends to associate with financial trading exchanges and what-not. My current interest is more along the lines of notifications related to activity stream processing (think Twitter / Facebook), and I am wondering if the DDS servers are worth looking into further. Could anyone who has practical experience with this technology, and/or a deep understanding of it, comment on how useful it is, and what scenarios it is best suited for? How does it stack up against more "traditional" JMS servers, and/or AMQP (or even STOMP or OpenWire, etc.)? Edit: FWIW, I found some information at this StackOverflow thread. Not a complete answer, but anybody else finding this question might also find that thread useful, hence the added link.
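
    For reference, the "traditional" JMS baseline mentioned above looks roughly like this - a minimal, hedged sketch using the standard javax.jms API (the ConnectionFactory is assumed to come from JNDI or a broker-specific constructor, and the topic name is made up). DDS differs mainly in being brokerless and in attaching per-topic QoS policies (reliability, durability, deadlines) instead of relying on a central server:

        import javax.jms.*;

        public class ActivityPublisher {
            // Publishes one activity-stream event to every current subscriber of the topic.
            public static void publish(ConnectionFactory factory, String json) throws JMSException {
                Connection connection = factory.createConnection();
                try {
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                    Topic topic = session.createTopic("activity.stream");
                    MessageProducer producer = session.createProducer(topic);
                    producer.send(session.createTextMessage(json));  // fire-and-forget pub/sub
                } finally {
                    connection.close();
                }
            }
        }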

    Read the article

  • What's cool about Lisp nowadays? [closed]

    - by Kos
    Possible duplicates: Why is Lisp useful? Is LISP still useful in today's world? Which version is most used? First of all, let me clarify: I'm aware of Lisp's place in history, as well as in education. I'm asking about its place in practical application, as of 2011. The question is: what features of Lisp make it the preferred choice for projects today? It's widely used in various AI areas as far as I know, and probably also elsewhere. I can imagine projects choosing, for instance, Python for its concise, readable syntax and for being dynamic; Haskell for being purely functional with a powerful type system; Matlab/Octave for the focus on numerics and big standard libraries; etc. When should I consider Lisp the proper language for a given problem? What language features make it the preferred choice then? Is its "purity and generality" an advantage that makes it a better choice for some subset of projects than the modern languages? Edit: as requested, a slight rephrasing (or simply a tl;dr) to make this more specific: a) What problems are solvable with Lisp much more easily than with more common, modern languages like Python or C# (or even F# or Scala)? b) What language features specific to Lisp make it the best choice for those problems?

    Read the article

  • Does an inexperienced programmer need an IDE?

    - by Torben Gundtofte-Bruun
    Reading this other question makes me wonder whether I (as an absolute beginner PHP programmer) should stick with WAMP and Notepad++ or switch to some IDE like Eclipse. It's understandable that skilled developers will benefit from a big shiny IDE. But why should an absolute beginner use an IDE? Do the benefits outweigh the extra challenge of learning the IDE on top of learning to develop? Update for clarification: My goal is to get some basic programming experience. By choosing PHP and WAMP (and FogBugz and Kiln) I hope to avoid having to navigate the tricky/messy OS specifics, compiling, etc., and just focus on basic functionality like an online user registration form. I've got lots of theoretical understanding from university a decade ago but no practical experience. I want to remedy that with a hobby project that would be similar to a real-world sellable web app. There are so many questions to ask. So many pitfalls I will probably blunder into. This question is just one piece (my first!) of that puzzle.

    Read the article

  • Resize a pendrive Linux install?

    - by user11239
    I'm running Ubuntu from USB media, which has a drive capacity of 250 GB, all existing as one FAT32 partition. However, when I created the bootable Ubuntu drive, only 4.79 GB were allocated for usage. Rather than put files directly into /cdrom where the drive is mounted, I want to expand what is listed here as aufs to be at least 200 GB. I'm hopeful that I can do this live. Output of df:

        Filesystem     1K-blocks      Used  Available Use% Mounted on
        aufs             4051904   4050108          0 100% /
        none             1542852       284    1542568   1% /dev
        /dev/sdb1      244076800   4901648  239175152   3% /cdrom
        /dev/loop0        688000    688000          0 100% /rofs
        none             1547840      1496    1546344   1% /dev/shm
        tmpfs            1547840      4828    1543012   1% /tmp
        none             1547840        80    1547760   1% /var/run
        none             1547840         0    1547840   0% /var/lock
        none             1547840         0    1547840   0% /lib/init/rw

    Output of fdisk -l:

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00083fe4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1       30401   244196001    c  W95 FAT32 (LBA)

    So basically what I want to do is get /dev/sdb1 to be entirely, or almost entirely, read as aufs. I'm confused over how to do this, as the file systems are all part of /dev/sdb1 in one big partition, rather than separate partitions for separate file systems.

    Read the article

  • Requesting quality analysis test cases ahead of an implementation/change

    - by arin
    Recently I have been assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who has since left the company, and who did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it came to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written by QA prior to the implementation and given to me, to aid my comprehension of the task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility of this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a two-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • What should a developer know before building a public web site?

    - by Joel Coehoorn
    What things should a programmer implementing the technical details of a web site address before making the site public? If Jeff Atwood can forget about HttpOnly cookies, sitemaps, and cross-site request forgeries all in the same site, what important thing could I be forgetting as well? I'm thinking about this from a web developer's perspective, such that someone else is creating the actual design and content for the site. So while usability and content may be more important than the platform, you the programmer have little say in that. What you do need to worry about is that your implementation of the platform is stable, performs well, is secure, and meets any other business goals (like not costing too much, not taking too long to build, and ranking as well with Google as the content supports). Think of this from the perspective of a developer who's done some work for intranet-type applications in a fairly trusted environment, and is about to have his first shot at putting out a potentially popular site for the entire big bad world wide web. Also: I'm looking for something more specific than just a vague "web standards" response. I mean, HTML, JavaScript, and CSS over HTTP are pretty much a given, especially when I've already specified that you're a professional web developer. So going beyond that, which standards? In what circumstances, and why? Provide a link to the standard's specification. This question is community wiki, so please feel free to edit the answer to add links to good articles that will help explain or teach each particular point. To search in only the answers from this question, use the inquestion:this option.
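
    As one concrete instance of the "easy thing to forget" category the question opens with, here is a small hedged sketch (Java Servlet 3.0+; every server-side stack has an equivalent) of issuing a session-style cookie with the HttpOnly and Secure flags set:

        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletResponse;

        public class CookieExample {
            // HttpOnly makes the cookie unreadable from JavaScript, Secure restricts it
            // to HTTPS - two cheap session-hardening wins that are easy to overlook.
            static void addSessionCookie(HttpServletResponse response, String token) {
                Cookie cookie = new Cookie("SESSIONID", token);
                cookie.setHttpOnly(true);   // Servlet 3.0+
                cookie.setSecure(true);
                cookie.setPath("/");
                response.addCookie(cookie);
            }
        }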

    Read the article

  • Remote Working & Relocation

    - by James Burgess
    Sorry if this question is a duplicate; I did some extensive searching and found nothing on quite the same topic (though a couple on partially overlapping topics). Recently, whilst on holiday in Munich, Germany, I was taken aback by the sheer number of programming-related posts available in the city that I easily qualify for (both in terms of knowledge and experience). The advertised working environments seemed good and the pay seemed to be at least as good as what I'd expect here in the UK. Probably 80% of the advertisements I saw on the underground were for IT-related jobs, and a good 60% of those I was easily qualified for. At the moment, I work as a freelancer, mostly on web and small software projects, but seeing the vast availability of jobs in Munich versus my local area has me thinking about remote working. I'm unable to relocate for a job for the next 3 years (my wife has a contract to continue being a doctor at her current hospital for that time) but would almost certainly be open to it after that (after all, my wife and I both love Munich). In the meantime, I would be very interested in remote working. So my question is this: do companies ever take on remote workers (even with semi-frequent trips to the office) from abroad, with a view to later relocation? And, if so, how do you go about broaching the topic with a recruiter when getting in contact about a job posting? Language isn't a barrier for me here, as 90% of the jobs I've looked up in Munich don't require German speakers (it seems they have a big recruiting market abroad). I'm also under no illusions about the disadvantages of remote working, but I'm more interested in the viability of the scenario than in the intricacies (at least at this point). I'd really appreciate any contributions, especially from those who have experience with working in such a scenario!

    Read the article

  • Is there such thing as a "theory of system integration"?

    - by Jeff
    There is a plethora of different programs, servers, and, in general, technologies in use in organizations today. We programmers have lots of different tools at our disposal to help solve various data and communication challenges in an organization. Does anyone know if anyone has done any serious thinking about how systems are integrated? Let me give an example: hypothetically, let's say I own a company that makes specialized suits à la Iron Man. In the area of production, I have CAD tools, machining tools, payroll, project management, and asset management tools, to name a few. I also have a nice design space, where designers show off their designs on big displays, some touch, some traditional. Oh, and I also have one of these newfangled LEED Platinum buildings, and it has a number of different computer-controlled systems, like smart window shutters that close when people are in the room, an HVAC system that adjusts depending on the number of people in the building, etc. What I want to know is whether anyone has done any scientific work on trying to figure out how to hook all these pieces together, so that, say, my access control system is hooked to my payroll system and my phone system, allowing me never to swipe a time card and to have my phone follow me throughout the building. This problem is also more than a technology challenge. Every technology implementation enables certain human behaviours, so the human must also be considered as a part of the system. Has anyone done any work on how to effectively weave these components together? FYI: I am not trying to build a system. I want to know if anyone has thoroughly studied the process of doing a large integration project, how they develop their requirements, how they studied the human behaviors, etc.

    Read the article

  • How to show or direct a business analyst to do data modelling?

    - by AaronLS
    Our business analysts pushed hard to collect data through a spreadsheet. I am the programmer responsible for importing that data. Usually when they push hard for something like this, I never know how well it will work out until a few weeks later, when I have time assigned to work on the task of programming the import of the data. I have tried to do as much as possible along the way: named ranges, data validations, etc. But I usually don't have time to take a detailed look at all the data and compare it to the destination in the database to determine how well it matches up. A lot of times there will be a little table of items that somehow I have to relate to something else in the database, but there are no natural or business keys present that would allow me to do so. I make the best of this, trying to write something that can compare strings and make a best guess, and then go through the effort of creating interfaces for a user to match the imported data to the destination. I feel that if the business analysts were actually creating a data model, they would be forced to think about these relationships, and would have an appreciation for the need for natural or business keys to be part of the spreadsheet for the purposes of smoothly importing the data. The closest they come to business analysis is a big flat list of fields, and that would be fine if it were like any other data dictionary and included data types and relationships, but it isn't. They are just a bunch of names. There is no indication of what type of data they might hold, and it is up to me to guess. When I have pushed for more detail, they say that it is just busy work. How can I explain the importance of data modelling? How can I tell them what it is and how to do it? It feels impossible, because they don't have an appreciation for its importance. They do, however, usually have an interest in helping out in whatever way they can; it's just that this in particular has never gotten a motivated response.
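
    To make the "compare strings and make a best guess" part concrete, this is roughly the code one ends up writing when the spreadsheet arrives without business keys - a hedged sketch with hypothetical names, using plain Levenshtein distance. It is exactly the guesswork a proper data model (with real keys in the sheet) would make unnecessary:

        import java.util.List;

        public class BestGuessMatcher {
            // Returns the destination value closest to the imported value, or null
            // if nothing falls within the given edit-distance threshold.
            public static String bestMatch(String imported, List<String> destinations, int maxDistance) {
                String best = null;
                int bestDistance = maxDistance + 1;
                for (String candidate : destinations) {
                    int d = levenshtein(imported.toLowerCase().trim(), candidate.toLowerCase().trim());
                    if (d < bestDistance) { bestDistance = d; best = candidate; }
                }
                return best;
            }

            // Classic dynamic-programming edit distance between two strings.
            private static int levenshtein(String a, String b) {
                int[][] dp = new int[a.length() + 1][b.length() + 1];
                for (int i = 0; i <= a.length(); i++) dp[i][0] = i;
                for (int j = 0; j <= b.length(); j++) dp[0][j] = j;
                for (int i = 1; i <= a.length(); i++)
                    for (int j = 1; j <= b.length(); j++)
                        dp[i][j] = Math.min(Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1),
                                dp[i - 1][j - 1] + (a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1));
                return dp[a.length()][b.length()];
            }
        }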

    Read the article

  • Scene graphs and spatial partitioning structures: What do you really need?

    - by tapirath
    I've been fiddling with 2D games for a while and I'm trying to move into 3D game development. I thought I should get my basics right first. From what I've read, scene graphs hold your game objects/entities and their relations to each other, e.g. 'a tire' would be a child of 'a vehicle'. They're mainly used for frustum/occlusion culling and for minimizing the collision checks between objects. Spatial partitioning structures, on the other hand, are used to divide a big game object (like the map) into smaller parts so that you can gain performance by only drawing the relevant polygons, and again by restricting collision checks to those polygons only. Also, a spatial partitioning data structure can be used as a node in a scene graph. But... I've been reading about both subjects and I've seen a lot of "scene graphs are useless" and "BSP performance gain is irrelevant with modern hardware" kinds of articles. Also, some of the game engines I've checked, like gameplay3d and jmonkeyengine, only use a scene graph (that may also be because they don't want to limit the developers), whereas games like Quake and Half-Life only use spatial partitioning. I'm aware that the usage of these structures depends very much on the type of game you're developing, so for the sake of clarity let's assume the game is an FPS like Counter-Strike with some better outdoor environment capabilities (like a terrain). The obvious question is which one is needed and why (considering modern hardware capabilities). Thank you.
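
    One way to see how the two structures relate in practice: a scene graph node mainly encodes parent/child transforms, and culling simply piggybacks on whatever bounding volume the node carries, while a spatial partition carves up space independently of any object hierarchy. A minimal, hedged sketch (hypothetical names) of the culling recursion that both approaches end up sharing:

        import java.util.ArrayList;
        import java.util.List;

        class SceneNode {
            final List<SceneNode> children = new ArrayList<>();
            float[] worldPosition = new float[3];   // simplified: translation only
            float boundingRadius;                   // sphere enclosing this node and its children

            // Frustum culling and broad-phase collision share the same recursion:
            // if the parent volume fails the test, the whole subtree is skipped.
            void collectVisible(Frustum frustum, List<SceneNode> out) {
                if (!frustum.intersectsSphere(worldPosition, boundingRadius)) return;
                out.add(this);
                for (SceneNode child : children) child.collectVisible(frustum, out);
            }

            interface Frustum {
                boolean intersectsSphere(float[] center, float radius);
            }
        }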

    Read the article

  • Social network, online community, company and job reviews, salary statistics and much more... Do we have it? Do we need it?

    - by Vlad Lazarenko
    I have many friends from Ukraine who are programmers, so I found out that they have a web site that collects, organizes, and analyzes information about IT companies, which includes location, feedback, company reviews from current and former employees, etc. They also collect programmers' salaries and organize them by language, region, etc. That web site is run by programmers and for programmers; all information is absolutely public and free. Plus, the web site has forums, and people can discuss things (more social than specific programming stuff), publish articles, news, etc. I personally think that is useful, especially for those who are new to this industry. For example, you might do a small amount of research and find out that, say, Java programmers get paid more than PHP programmers but demand is lower. Or you get an offer from a company, are about to accept it, but read reviews and find out that they don't even provide internet access at work, and if you need to download something, you have to ask your manager to do it for you, and managers share a single computer that has an internet connection to get that stuff for you (there is only one such company in Kiev, Ukraine, called SMK, for Software Mac Kiev, a big shame). So the question is: do we have something like it in the US? Or at least, say, for the New York region? Or state? All the information I managed to find online is inaccurate or incomplete. Forums are very specific. If we don't have it, would you be interested in creating such a portal? Thanks!

    Read the article

  • Compressing 2D level data

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I have already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst-case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap<Integer, Integer>[][]
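
    A minimal sketch of per-row RLE for one layer of tile ids (hypothetical names, plain ints). The part that matters for editing is the last method: only the one row containing the changed tile is decoded, modified and re-encoded, which at 128 tiles per row is far too little work to hurt the map maker's responsiveness:

        import java.util.ArrayList;
        import java.util.List;

        public class RleRow {
            // Encodes one row of tile ids as (count, id) pairs, e.g. 0,0,0,5 -> [3,0, 1,5].
            public static int[] encode(int[] row) {
                List<Integer> out = new ArrayList<>();
                int i = 0;
                while (i < row.length) {
                    int run = 1;
                    while (i + run < row.length && row[i + run] == row[i]) run++;
                    out.add(run);
                    out.add(row[i]);
                    i += run;
                }
                int[] packed = new int[out.size()];
                for (int k = 0; k < packed.length; k++) packed[k] = out.get(k);
                return packed;
            }

            // Decodes back to the flat row; rowLength would be 128 for these maps.
            public static int[] decode(int[] packed, int rowLength) {
                int[] row = new int[rowLength];
                int pos = 0;
                for (int k = 0; k < packed.length; k += 2) {
                    int run = packed[k], id = packed[k + 1];
                    for (int r = 0; r < run; r++) row[pos++] = id;
                }
                return row;
            }

            // Editing a single tile: decode only the affected row, change it, re-encode.
            public static int[] setTile(int[] packed, int rowLength, int x, int newId) {
                int[] row = decode(packed, rowLength);
                row[x] = newId;
                return encode(row);
            }
        }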

    Read the article

  • Is there a secure way to add a database troubleshooting page to an application?

    - by Josh Yeager
    My team makes a product (business management software) that our customers install on their own servers. The product uses a SQL database for data storage and app configuration. There have been quite a few cases where something strange happened in a customer's database (caused by bugs in our app, and also sometimes by admins who mess with the database). To figure out what is wrong with the data, we have to send SQL scripts to the customer and tell them how to run them on the database server. Then, once we know how to fix it, we have to send another script to repair the data. Is there a secure way to add a page in our application that allows an application admin to enter SQL scripts that read and write directly to the database? Our support team could use that to help customers run these scripts, without needing direct access to the SQL server. My big concerns are that someone might abuse this power to get data they shouldn't have, and maybe to erase or modify data that they shouldn't be able to modify. I'm not worried about system admins, because they could find another way to do the same thing. But what if someone else got access to the form? Is there any way to do this kind of thing securely?
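
    There is no way to make "run arbitrary SQL from a page" entirely safe, but the usual mitigations can at least be sketched: the page uses a dedicated, least-privileged database login (read-only unless a write really is needed), and every script is written to an audit table together with who ran it. A rough, hedged JDBC illustration - all names are hypothetical, and the real protection lives in the database-side grants, not in this code:

        import java.sql.*;

        public class SupportQueryRunner {
            // auditConn: the application's normal connection, used only to record who ran what.
            // supportUrl/user/password: a dedicated login whose grants are SELECT-only, so
            // "read and write directly" is deliberately narrowed to read-only by default.
            public static ResultSet runScript(Connection auditConn, String supportUrl,
                                              String supportUser, String supportPassword,
                                              String adminName, String script) throws SQLException {
                try (PreparedStatement audit = auditConn.prepareStatement(
                        "INSERT INTO support_audit (admin_name, script, run_at) VALUES (?, ?, ?)")) {
                    audit.setString(1, adminName);
                    audit.setString(2, script);
                    audit.setTimestamp(3, new Timestamp(System.currentTimeMillis()));
                    audit.executeUpdate();
                }

                Connection restricted = DriverManager.getConnection(supportUrl, supportUser, supportPassword);
                restricted.setReadOnly(true);      // advisory; real enforcement is the login's grants
                Statement stmt = restricted.createStatement();
                return stmt.executeQuery(script);  // caller renders the rows, then closes the connection
            }
        }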

    Read the article

  • A new tool in beta: Conflict Alert

    - by Alex Davies
    You know how manual merges are a real pain? Well, I've just released a Visual Studio extension that makes manual merges a thing of the past. No source control system can automatically merge two edits to the same line of code. Conflict Alert solves this by warning you that you are heading down a path that will cause a manual merge later down the line. You choose whether you want to carry on, or talk to your teammate and find out what they are doing. Have you ever warned your teammates that you are doing a big refactor, and that they should 'keep out of class X'? Conflict Alert tells them for you automatically, by highlighting the sections of code that you have edited. It doesn't need to connect to your source control system, so it works no matter which one you use. It's a first release, and I hope it is useful. Any feedback would be gratefully received. Grab a teammate and try it now.

    Read the article

  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading... but I am currently travelling and so have had very little time and access to resources to do much fsharping - this has meant that I am right now missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked about 8 key points of the F# language, namely: How well does it do floating-point? Does it allow vector instructions? How friendly is it towards optimizing compilers? How big a memory footprint does it have? Does it allow fine-grained control over memory locality? Does it have capacity for distributed-memory processors, for example Cray? What features does it have that may be of interest to computational science, where heavy number processing is involved? Are there actual scientific computing implementations that use it? Now, I don't have much time to look into a decent response and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what has been put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything other than what has been covered in the answer section already.

    Read the article

  • When running a .jar application with OpenJDK, my keyboard becomes unresponsive?

    - by Mochan
    I recently downloaded a Java application in .jar format, and had it running on my computer not so long ago. Now that I'm temporarily using my desktop instead of my laptop, I want it to run there. On my laptop it was a tremendous hassle to get OpenJDK to even run the application without it going black, and on my desktop I don't have that problem. However, when I run the application, my keyboard becomes unresponsive and doesn't type at all. This is a really big problem because the application demands the use of a keyboard. It works perfectly on my laptop, but on the desktop it's completely useless. I don't know if there's a keyboard driver I'm missing, but there shouldn't be, because the keyboard runs flawlessly everywhere else. I'm using OpenJDK 6 because 7 has the same 'black screen' issue I mentioned, so I need this to work with OpenJDK 6. Thanks so much in advance, and I'll try to specify as many details as I can. M

    Read the article

  • Encrypt SSD or not?

    - by JamesBradbury
    My desktop machine is running Ubuntu 12.04 (and will probably stay with it until the next LTS). I've got a new 120GB SSD on the way, as well as my existing 420GB spinning disk. If it makes any difference, I'll be dual-booting with Windows 7 across both disks too. I've read some helpful answers here about /home setup and enabling TRIM, which I intend to follow. So most of my /home will be on the SSD, with only photos, videos and music on the spinning disk. The question is: when I reinstall Ubuntu from CD or USB, should I encrypt the SSD? Specifically: I'm reading that drive wear isn't much of an issue with modern SSDs, as they last decades even if you spam them. Is this true? How big a performance reduction will encrypting cause (I have an i7 Sandy Bridge, so I guess it can cope)? Is it more important from a security point of view to encrypt an SSD? I think I read somewhere that it may be hard to reliably wipe data. By all means answer even if you only know about one of those things.

    Read the article

  • Reinventing the Wheel, why should I?

    - by Mercfh
    So I have this problem. It may be my OCD (I have OCD; it's not severe, but it makes me very... let's say specific about certain things, programming being one of them), or it may be the fact that I graduated college and still feel "meh" at programming. Reading this made me think "oh, that's me!", but that's not really my main problem. My big problem is: any time I'm using a high-level language/API/etc., I always think to myself that I'm not really "programming". I know, I know... it sounds stupid. But I feel like if I can't figure out how to do it at the lowest level, then I'm not really "understanding" it. I do this for just about every new technology I learn. I look at the lowest level and try to understand it. Sometimes I do... most of the time I don't. I mean, I've only really been programming for 4 years (at college, if you even call it programming... our university's program was "meh"). For instance, I do a little bit of embedded programming (with the 8-bit Atmel AVR/Arduino stuff), and I can't bring myself to use the C compiler, even though it's 8 million times easier than using assembly... it's stupid, I know. Does anyone else feel like this? I think it's just my OCD that makes me feel this way... but has anyone else ever felt like they need to go down to the lowest level of the language to even be satisfied with using it? I apologize for the very, very odd question, but I think it really hinders me in getting deeply into a programming language and making a real application of my own. (It's silly, I know.)

    Read the article

  • Are web application usability issues equal to website usability issues?

    - by Kor
    I've been reading two books about web usability issues and testing (Rocket Surgery Made Easy and Prioritizing Web Usability), and they describe strategies for typical website usability problems and how to deal with them. However, I want to build a web application, and I think I've lost track of what I am trying to solve. These two books deal with plain websites (e-commerce, business sites, even intranets), but I'm not sure whether everything about website usability is applicable to web application usability. They certainly talk about always having the Back button available (and usable), about focusing on short pieces of information rather than big amounts of text, etc., but they could be missing deeper problems that may be easier (or simply avoidable) in regular websites. Does anybody have some experience in this field and could tell me whether web applications and websites share their usability issues? Thanks in advance. Edit: Quoting Wikipedia, a website is a collection of related web pages containing images, videos or other digital assets, and a web application is an application that is accessed over a network such as the Internet or an intranet. To sum up, both show/let you search/produce information, but websites are "simple" in interaction and keep to the classics of websites (one-click actions), while web applications are closer to desktop applications in their uses and ways of interaction (double click, modal windows, asynchronous calls [to keep you in the same "environment" instead of reloading it], etc.). I don't know if this clarifies the difference. Edit 2: Quoting @Victor and myself, a website is anything running in your browser, but a web application is something running in your browser that could also be running on your desktop, with similar behaviors and features. Gmail is a web application that could replace Outlook. GDocs could replace Office. Grooveshark could replace your music player, etc.

    Read the article

  • Ubuntu 12.10: Installing proprietary Nvidia driver causes freeze at boot

    - by Greg
    Ok, so I just installed Ubuntu on my laptop, and I immediately encountered an issue: the HDMI audio output won't work. Yes, I know about the sound settings where you have to select the HDMI option, but even when it's selected I get no sound out of the TV I'm hooking it up to. This is a dealbreaker for me, because my laptop speakers are terrible; it's one of the big reasons I use my TV as a monitor. So I decided to work on solving the problem by upgrading my Nvidia drivers. I switched to one of the proprietary drivers offered in the software updating utility that comes with the OS, the one option that said (tested). Voilà, sound over HDMI is now working. Unfortunately, this brings me to my next problem: when I reboot Ubuntu with this or any other proprietary driver installed, it freezes when it tries to load my desktop. As in, I can see my wallpaper, but no icons or options of any kind. The system is totally frozen, and gives me one of those "we've experienced an error, do you want to report it" messages. So there's my bind. I need HDMI audio out, that's a total dealbreaker for me, but installing the drivers that give me that capability crashes the system. Does anyone have any idea what's causing this?

    Read the article

  • Best practices: Ajax and server side scripting with stored procedures

    - by Luka Milani
    I need to rebuild an old, huge website and will probably port everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses: Ajax (client side with prototype.js), ASP (VBScript server side), SQL Server 2005, and IIS 7 as the web server. This website uses hundreds of stored procedures, and the requests are made by an Ajax call to a single ASP page that contains a huge select case. A short example:

    JavaScript + Prototype:

        var data = {
            action: 'NEWS',
            callback: 'doNews',
            param1: $('text_example').value,
            ......: ..........
        };
        AjaxGet(data); // perform a call using another function + prototype

    Server-side ASP:

        <%
        ......
        select case request("Action")
            case "NEWS"
                With cmmDB
                    .ActiveConnection = Conn
                    .CommandText = "sp_NEWS_TO_CALL_for_example"
                    .CommandType = adCmdStoredProc
                    Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
                    Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
                    ' ........ ' can be more parameters
                    .Parameters.Append par0DB
                    .Parameters.Append par1DB
                    par0DB.Value = request("Param1")
                    par1DB.Value = request(".....")
                    set rs = cmmDB.execute
                    RecodsetToJSON rs, jsa ' create JSON response using a sub
                End With
        ....
        %>

    So as you can see, I have one ASP page with a lot of cases, and this page answers all the Ajax requests in the site. My questions are: Instead of having many cases, is it possible to write dynamic VB code that parses the Ajax request and dynamically builds the call to the desired SP (including the parameters passed from JS)? What is the best approach to handle situations like this while taking advantage of .NET plus Prototype or jQuery? How do the big sites handle situations like this - do they create one page per request? Thanks in advance for suggestions, directions and tips.
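
    On the first question - replacing the giant select case with one generic dispatcher - the idea can be sketched as a whitelist that maps action names to stored procedures and binds the request parameters by position. The sketch below is hedged and in Java/JDBC terms with made-up names; in ASP.NET the same shape works with SqlCommand and CommandType.StoredProcedure, and SqlCommandBuilder.DeriveParameters can even discover a procedure's parameters for you.

        import java.sql.*;
        import java.util.Map;

        public class SpDispatcher {
            // Whitelist: only actions listed here can reach the database at all.
            private static final Map<String, String> ACTIONS = Map.of(
                    "NEWS", "sp_NEWS_TO_CALL_for_example",
                    "USERS", "sp_GET_USERS");

            // params arrive from the Ajax request in order: param1, param2, ...
            public static ResultSet dispatch(Connection conn, String action, Object... params)
                    throws SQLException {
                String proc = ACTIONS.get(action);
                if (proc == null) throw new IllegalArgumentException("Unknown action: " + action);

                StringBuilder call = new StringBuilder("{call " + proc + "(");
                for (int i = 0; i < params.length; i++) call.append(i == 0 ? "?" : ",?");
                call.append(")}");

                CallableStatement stmt = conn.prepareCall(call.toString());
                for (int i = 0; i < params.length; i++) stmt.setObject(i + 1, params[i]);
                return stmt.executeQuery();   // caller serializes the rows to JSON
            }
        }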

    Read the article
