Search Results

Search found 14801 results on 593 pages for 'base de datos'.


  • Can too much abstraction be bad?

    - by m3th0dman
    As programmers, I feel our goal is to provide good abstractions over the given domain model and business logic. But where should this abstraction stop? How do you make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits? I believe I tend to write overly abstracted code, and I don't know how good that is. I often tend to write it like some kind of micro-framework, which consists of two parts:
    1. Micro-modules which are hooked into the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional work described in the requirements.
    2. Connecting code; here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at first. This arises from the fact that it is pure abstraction, with the real work and business logic performed in the code described at 1; for this reason, this code is not expected to change once tested.
    Is this a good approach to programming? That is, having the changing code fragmented across many modules and very easy to understand, and the non-changing code very complex from the abstraction point of view? Should all the code be uniformly complex (that is, code 1 more complex and interlinked, and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time but change is expensive; or is the approach described above good, where the "changing code" is very easy to understand, debug and change, and the "linking code" is somewhat difficult? Note: this is not about code readability! Both the code at 1 and the code at 2 are readable, but the code at 2 comes with more complex abstractions while the code at 1 comes with simple abstractions.
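    To make the two-part structure concrete, here is a minimal C# sketch of the split described above; the interface and class names are hypothetical and only illustrate the idea of simple, changeable modules versus abstract connecting code.

    using System;
    using System.Collections.Generic;

    // A "micro-module": small, concrete, easy to read and change.
    public interface IModule
    {
        string Name { get; }
        void Execute(IDictionary<string, object> context);
    }

    public class ExportReportModule : IModule
    {
        public string Name => "ExportReport";

        public void Execute(IDictionary<string, object> context)
        {
            Console.WriteLine("Exporting report...");   // the functional work lives here
        }
    }

    // The "connecting code": pure abstraction, rarely changed once tested.
    public class MicroFramework
    {
        private readonly Dictionary<string, IModule> modules = new Dictionary<string, IModule>();

        public void Register(IModule module) => modules[module.Name] = module;

        public void Run(string moduleName, IDictionary<string, object> context)
        {
            if (!modules.TryGetValue(moduleName, out var module))
                throw new InvalidOperationException($"No module named '{moduleName}'.");
            module.Execute(context);
        }
    }

    In this sketch, most changes land in small modules like ExportReportModule, while MicroFramework carries the abstraction and stays stable.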

    Read the article

  • OBIEE 11.1.1.6.5 Bundle Patch released Oct 2012

    - by user554629
    October 2012: the OBIEE 11.1.1.6.5 bundle patch has been released. Bundle patches are collections of controlled, well-tested critical bug fixes for a specific product, which may include security content and occasionally minor enhancements. They are cumulative in nature, meaning the latest bundle patch in a particular series includes the contents of the previous bundle patches released. A suite bundle patch is an aggregation of multiple product bundle patches that are part of a product suite. For OBIEE on 11.1.1.6.0, we plan to run a monthly bundle patch cadence.
    The 11.1.1.6.5 bundle patch:
    - is available for download from My Oracle Support
    - is cumulative, so it includes everything from previous updates
    - is available for supported platforms (Windows, Linux, Solaris, AIX, HPUX-IA)
    To locate it, navigate to https://support.oracle.com and log in, open the Knowledge Base tab, select a product line [Business Intelligence], select a task [Patching and Maintenance], and click Search. See "OBIEE 11g: Required and Recommended Patches and Patch Sets", ID 1488475.1 (Oct 23, 2012); 11.1.1.6.5 was published 19th October 2012.
    Note: The 11.1.1.6.x versions on top of 11.1.1.6.0 are not upgrades; they are opatch fixes. This is not an upgrade process like going from OBIEE 10g to 11g, or from OBIEE 11.1.1.5 to 11.1.1.6. It is much safer than applying one-off fixes, which are not regression tested. You will be more successful using 11.1.1.6.5.

    Read the article

  • Multiple vulnerabilities in Firefox

    - by Ritwik Ghoshal
    Each entry below lists the CVE, the vulnerability description, and its CVSSv2 base score. Component: Firefox. Product and resolution: Solaris 10 (SPARC: 145080-13, X86: 145081-12).
    CVE-2012-3982: Denial of service (DoS) vulnerability (10.0)
    CVE-2012-3983: Denial of service (DoS) vulnerability (10.0)
    CVE-2012-3986: Permissions, Privileges, and Access Controls vulnerability (6.4)
    CVE-2012-3988: Resource Management Errors vulnerability (9.3)
    CVE-2012-3990: Resource Management Errors vulnerability (10.0)
    CVE-2012-3991: Permissions, Privileges, and Access Controls vulnerability (9.3)
    CVE-2012-3992: Permissions, Privileges, and Access Controls vulnerability (5.8)
    CVE-2012-3993: Design Error vulnerability (9.3)
    CVE-2012-3994: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability (4.3)
    CVE-2012-3995: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4179: Resource Management Errors vulnerability (10.0)
    CVE-2012-4180: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4181: Resource Management Errors vulnerability (10.0)
    CVE-2012-4182: Resource Management Errors vulnerability (10.0)
    CVE-2012-4183: Resource Management Errors vulnerability (10.0)
    CVE-2012-4184: Permissions, Privileges, and Access Controls vulnerability (9.3)
    CVE-2012-4185: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4186: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4187: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4188: Improper Restriction of Operations within the Bounds of a Memory Buffer vulnerability (10.0)
    CVE-2012-4192: Permissions, Privileges, and Access Controls vulnerability (4.3)
    CVE-2012-4193: Design Error vulnerability (9.3)
    CVE-2012-4194: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability (4.3)
    CVE-2012-4195: Permissions, Privileges, and Access Controls vulnerability (5.1)
    CVE-2012-4196: Permissions, Privileges, and Access Controls vulnerability (5.0)
    This notification describes vulnerabilities fixed in third-party components that are included in Oracle's product distributions. Information about vulnerabilities affecting Oracle products can be found on the Oracle Critical Patch Updates and Security Alerts page. Note: Solaris 10 patches SPARC: 145080-13 and X86: 145081-12 contain the fix for all CVEs between Firefox versions 10.0.7 and 10.0.12.

    Read the article

  • What are some practical uses of the "new" modifier in C# with respect to hiding?

    - by Joel Etherton
    A co-worker and I were looking at the behavior of the new keyword in C# as it applies to the concept of hiding. From the documentation: "Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier." We've read the documentation, and we understand what it basically does and how it does it. What we couldn't really get a handle on is why you would need to do it in the first place. The modifier has been there since 2003, we've both been working with .NET for longer than that, and it has never come up. When would this behavior be necessary in a practical sense (e.g., as applied to a business case)? Is this a feature that has outlived its usefulness, or is what it does simply uncommon enough in the work we do (specifically web forms and MVC applications, plus a small amount of WinForms and WPF)? In trying this keyword out and playing with it, we found some behaviors it allows that seem a little hazardous if misused. This sounds a little open-ended, but we're looking for a specific use case, applicable to a business application, that finds this particular tool useful.
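    For reference, a minimal sketch of the hiding behavior under discussion (the class names here are made up for illustration): calling through a base-class reference invokes the base member, because new hides rather than overrides.

    using System;

    class BaseLogger
    {
        public void Log(string message) => Console.WriteLine("Base: " + message);
    }

    class DerivedLogger : BaseLogger
    {
        // 'new' hides BaseLogger.Log instead of overriding it.
        public new void Log(string message) => Console.WriteLine("Derived: " + message);
    }

    class Program
    {
        static void Main()
        {
            DerivedLogger derived = new DerivedLogger();
            BaseLogger asBase = derived;

            derived.Log("hello");   // prints "Derived: hello"
            asBase.Log("hello");    // prints "Base: hello" -- no virtual dispatch
        }
    }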

    Read the article

  • Am I programming too slow?

    - by Jonn
    I've only been in the industry a year, and I've had some problems making estimates for specific tasks. Before you close this: yes, I've already read http://programmers.stackexchange.com/questions/648/how-to-respond-when-you-are-asked-for-an-estimate and that's about the same problem I'm having. But I'm looking for a more specific gauge of experiences, something quantifiable, or other programmers' average performance which I should aim for and base my estimates on. The answers there range in weeks; I was looking more for an answer at the level of a task assigned for a day or so. (Note that this doesn't include submitting for QA or documentation, just the actual development time, from writing tests if I used TDD to building the page, before it is submitted to testing.) My current rate (on ASP.NET WebForms) is as follows:
    - I'm able to develop a simple data-entry page with a grid listing (no complex logic, just Create and Read) on an already-built architecture in one full day's (8 hours) time.
    - Adding complex functionality, and Update and Delete pages, adds another full day to the task.
    - If I have to start the page from scratch (no solution, no existing website), it takes another full day.
    - (Not always, but) if I encounter something new or something I haven't done before, it takes another full day.
    Whenever I make an estimate that's longer than expected, I feel that others think I'm lagging a lot behind everyone else. I'm just concerned, as there has been an expectation that when it's just one page it should take me no more than a full day. Yes, there definitely is more room for improvement. There always is. I have a lot to learn. But I would like to know whether my current rate is way too slow, just average, or average for someone no more than a year in the industry.

    Read the article

  • Visually and audibly unambiguous subset of the Latin alphabet?

    - by elliot42
    Imagine you give someone a card with the code "5SBDO0" on it. In some fonts, the letter "S" is difficult to distinguish visually from the number five (likewise the number zero and the letter "O"). Reading the code out loud, it might be difficult to distinguish "B" from "D", necessitating saying "B as in boy" and "D as in dog", or using a phonetic alphabet instead. What's the biggest subset of letters and numbers that will, in most cases, both look unambiguous visually and sound unambiguous when read aloud? Background: we want to generate a short string that can encode as many values as possible while still being easy to communicate. Imagine you have a 6-character string, "123456". In base 10 this can encode 10^6 values. In hex, "1B23DF", you can encode 16^6 values in the same number of characters, but this can sound ambiguous when read aloud ("B" vs. "D"). Likewise, for any string of N characters you get (size of alphabet)^N values. The string is limited to a length of about six characters, since it should fit easily within human working memory. Thus, to find the maximum number of values we can encode, we need to find the largest unambiguous set of letters and numbers. There's no reason we can't consider the letters G-Z and some common punctuation, but I don't want to have to manually compare pairwise "does G sound like A?", "does G sound like B?", "does G sound like C?" myself. As we know, this would be O(n^2) linguistic work =)...
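    As a rough illustration of the encoding math, here is a short C# sketch using one candidate character set; the particular characters chosen are an assumption for the example, not a vetted answer to the question being asked.

    using System;

    class CodeGenerator
    {
        // One possible subset; which characters truly survive the visual/audio
        // ambiguity test is exactly what the question is asking.
        const string Alphabet = "346789ACEFHJKLPRTWXY";

        static readonly Random Rng = new Random();

        static string NewCode(int length)
        {
            var chars = new char[length];
            for (int i = 0; i < length; i++)
                chars[i] = Alphabet[Rng.Next(Alphabet.Length)];
            return new string(chars);
        }

        static void Main()
        {
            // With an alphabet of size S and codes of length N there are S^N values:
            // here 20^6 = 64,000,000 distinct 6-character codes.
            Console.WriteLine(NewCode(6));
            Console.WriteLine(Math.Pow(Alphabet.Length, 6));
        }
    }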

    Read the article

  • Wireless connected but internet not working, modem is fine because I'm typing over wifi on my android

    - by Sandro Livaja
    I can't connect to the Internet in 11.10. I tried switching my wifi card to another USB slot but nothing changed. Is it possible that the card is able to connect while not being able to receive packets?
    $ ifconfig
    eth0    Link encap:Ethernet  HWaddr 40:61:86:35:11:b0
            UP BROADCAST MULTICAST  MTU:1500  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
            Interrupt:43 Base address:0x6000
    lo      Link encap:Local Loopback
            inet addr:127.0.0.1  Mask:255.0.0.0
            inet6 addr: ::1/128 Scope:Host
            UP LOOPBACK RUNNING  MTU:16436  Metric:1
            RX packets:0 errors:0 dropped:0 overruns:0 frame:0
            TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:0
            RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
    wlan0   Link encap:Ethernet  HWaddr 74:ea:3a:8c:d5:5c
            inet addr:192.168.2.5  Bcast:192.168.2.255  Mask:255.255.255.0
            inet6 addr: fe80::76ea:3aff:fe8c:d55c/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:12 errors:0 dropped:0 overruns:0 frame:0
            TX packets:102 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:1209 (1.2 KB)  TX bytes:15329 (15.3 KB)

    Read the article

  • Server-side Architecture for Online Game

    - by Draiken
    Basically, I have a game client that has to communicate with a server for almost every action it takes. The game is in Java (using LWJGL), and right now I am about to start building the server. The base of the game is normally one client communicating with the server alone, but later on I will require several clients to work together for some functionality. I've already read how the authentication server should be separated, and I intend to do that. The problem is that I am completely inexperienced in this kind of server-side programming; all I've ever programmed are JSF web applications. I imagine I'll use socket connections for pretty much all game communication, since HTTP is very slow, but I still don't really know where to start on my server. I would appreciate reading material or guidelines on where to start, what architecture the game server should have, and maybe some suggestions on frameworks that could help with the client-server communication. I've looked into JNAG, but I have no experience with this kind of thing, so I can't really tell whether it is a solid and good messaging layer. Any help is appreciated. Thanks!
    EDIT: Just a little more information about the game. It is intended to be a very complex game with several functionalities, making some functionality effectively a "program" inside the program. It is not a typical game like an FPS or RPG, but I intend to have a lot of users using these many different "programs" inside the game. If I wasn't clear enough: I'd really appreciate hearing from people who have already developed games with a Java client/server architecture, how they communicated, any frameworks, APIs, messaging systems, etc. It is not a question of lack of knowledge of the language; it is more a request for advice, so I don't end up creating something that works but in later stages has to be rewritten for some limiting reason.

    Read the article

  • How (and when) to move users to mysqli and PDO_MYSQL?

    - by cj
    An important discussion is taking place on the PHP "internals" development mailing list, and it's one you should take note of. It concerns the next step in transitioning PHP applications away from the very old mysql extension and towards adopting the much better mysqli extension or the PDO_MYSQL driver for PDO. This would allow the mysql extension to be removed at some as-yet undetermined time in the future. Both mysqli and PDO_MYSQL have been around for many years and have various advantages: http://php.net/manual/en/mysqlinfo.api.choosing.php The initial RFC for this next step is at https://wiki.php.net/rfc/mysql_deprecation I would expect the RFC to change substantially based on the current discussion. The crux of that discussion is the timing of the next step of deprecation. There is also discussion of the carrot approach (showing users the benefits of moving) and the stick approach (displaying warnings when the mysql extension is used). As always, there is a lot of guesswork going on as to which MySQL APIs are in current use by PHP applications, how those applications are deployed, and what their upgrade cycle is. This is where you can add your weight to the discussion, and also help by spreading the word to move to mysqli or PDO_MYSQL. An example of such a 'carrot' is the excellent summary on Ulf Wendel's blog: http://blog.ulf-wendel.de/2012/php-mysql-why-to-upgrade-extmysql/ I want to repeat that no time frame for the eventual removal of the mysql extension has been set. I expect it to be some years away.

    Read the article

  • How to combat negative SEO?

    - by Perturbed
    Someone has decided to create a hate blog on a hosted blogging service (wordpress.com) that bashes my company. The blog contains posts that completely flame me and my service, and contains complete falsehoods about how I run my business. Without going into details, I'm pretty sure the author of this blog is an owner of a competing service (although it is authored completely anonymously). Frankly, I'm not sure whether the content would qualify as defamation or not, but I really don't like the idea of spending money on a lawyer to even attempt to prove it. I also have no interest in retorting or even replying to the blog in any way; I feel this would lend legitimacy to the ludicrous claims that have been posted. Unfortunately, whoever wrote the blog was pretty smart about using the keywords that people commonly use to search for my service. Because my customer base is relatively small and local, our PageRank is not incredibly high. As a result, when someone Googles our business name, this blog is usually within the top five results (thankfully, it's never above the business's actual website, but it's usually within eyeshot). It's incredibly frustrating to hear from customers who have seen the link (luckily, most of the time they think the author is crazy). Is there anything I can do to combat this? Would it be worthwhile to set up my own hosted wordpress.com-branded blog, in an effort to outrank the offending blog with a more active one of my own? TL;DR: Someone made a hate blog using wordpress.com and it is now on the first page of my business's search results. What are my options?

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Boot-Camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 15-17, 2012 in Paris, France: Register here. The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID's features and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in English.
    What will be covered? The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to help them gain valuable skills for the implementation of projects.
    Prerequisites: You must bring a laptop with you for the hands-on labs. Attendees should have experience and familiarity with the basic concepts of business intelligence and be OPN Partners with Gold or above membership. This training is free to OPN Partners. Click here for more information.
    Where and when? Monday, October 15th through Wednesday, October 17th inclusive, 9:00 - 18:00, at Oracle France, 15, boulevard Charles de Gaulle, 92715 Colombes: Access Venue Map. Register here. Note: there is a limited number of seats; you will get confirmation within 2 weeks.

    Read the article

  • XNA on the TechNet Wiki

    - by Michael B. McLaughlin
    Many months ago I came across an interesting Microsoft website, the TechNet Wiki, when I was looking for information about something that I can’t even remember anymore. I noticed at the time that its section on gaming technologies was sparse and even exchanged a few emails with one of the friendly Microsoft employees who contributes there regularly about some ideas I had for the site. I seem to recall mentioning my intentions to add some articles on XNA when I found the time but between one thing and another it seemed like I was busy from the end of last Summer straight through ‘til now. Yesterday I came across the TechNet Wiki link in my miscellaneous links collection and remembered my intentions many months ago. I decided that adding XNA pages to it would make a nice project to work on while taking breaks from my other projects. So I wrote my first two articles for it: XNA Framework Overview and Content Pipeline Overview. I hope to add more in the coming days and weeks. I’d be delighted if some of my fellow XNA enthusiasts out there joined in, time permitting. Anyone else who’d like to add a page or two on a topic area you’re familiar with, this seems like a great opportunity to contribute to the community and help build a nice knowledge base to benefit all of us who are always interested in learning something new!

    Read the article

  • Strategies for managing use of types in Python

    - by dave
    I'm a long-time programmer in C# but have been coding in Python for the past year. One of the big hurdles for me was the lack of type declarations for variables and parameters. Whereas I totally get the idea of duck typing, I do find it frustrating that I can't tell the type of a variable just by looking at it. This is an issue when you look at someone else's code where they've used ambiguous names for method parameters (see the edit below). In a few cases, I've added asserts to ensure parameters comply with an expected type, but this goes against the whole duck-typing thing. On some methods, I'll document the expected type of parameters (e.g. a list of user objects), but even this seems to go against the idea of just using an object and letting the runtime deal with exceptions. What strategies do you use to avoid typing problems in Python?
    Edit: an example of the parameter-naming issue. In our code base we have a task object (an ORM object) and a task_obj object (a higher-level object that embeds a task). Needless to say, many methods accept a parameter named 'task'. The method might expect a task or a task_obj or some other construct such as a dictionary of task properties; it is not clear. It is then up to me to look at how that parameter is used in order to work out what the method expects.

    Read the article

  • Architecture of a "website generator" web application

    - by Resorath
    What is the most maintainable and efficient way to architect a web application whose purpose is to host and generate websites which can be customized to a certain degree? There are a lot of applications of this style in the wild that generate all kinds of sites, from sites that host World of Warcraft guilds, like guildlaunch, to others, like mywedding, for wedding site hosting. My question is: what is the basic architecture that these sites operate on? I imagine there are two ways of thinking about this. Either there is a central set of code that all sites on the host run against, and it acts differently based on which site was visited; in this manner, when the base code is updated, all sites are updated simultaneously. Or, the code for an individual site exists in a silo and is simply replicated to a new directory each time a site is created; when an update needs to be applied, the code is pushed out to each site silo. In my case I am working in PHP with the CodeIgniter framework, but the answer need not be limited to this case. Which method (if any) creates a more maintainable and efficient architecture for managing this style of web application?

    Read the article

  • How to configure KDE default settings for a new user of a group?

    - by Adobe
    I'm a sys admin on a Kubuntu 11.10 machine. Where do I configure the basic settings for a new user (say, one belonging to the group "users")?
    Edit 1: I want to configure languages. Currently my new users get the English and Bulgarian languages; I want them to get English and Russian, and also Alt-CapsLock as the input-language-switching combination.
    Edit 2: How do I configure things in /usr/share/kde4? When I do kdesudo systemsettings and save the configuration, only root's settings get changed, not the ones under /usr/share/kde4.
    Edit 3: A new user gets the /etc/skel files controlling bash behaviour and appearance. What about KDE's default files for a new user? Where are they stored?
    Edit 4: Oh, I found some hints: kde4-config --path config gives a list of folders (separated by colons) where KDE looks for configs. My machine responded with:
    /home/boris/.kde/share/config/
    /etc/kde4/
    /usr/share/kubuntu-default-settings/kde4-profile/default/share/config/
    /usr/share/kde4/config/
    /usr/share/desktop-base/profiles/kde-profile/share/config/
    It looks like the third line is where KDE takes the default options from. So I found these zillions of settings, but no GUI way to configure them.
    Edit 5: Finally, I created a dummy user, configured it, and wrote a script which copies its settings to a given user (or users). The trick is to chown the dot files after transferring them from one user to another. I've tested it; it works fine.

    Read the article

  • Clustering Strings on the basis of Common Substrings

    - by pk188
    I have around 10,000+ strings and have to identify and group all the strings which look similar (I base the similarity on the number of common words between any two given strings). The more common words, the more similar the strings. For instance:
    1. How to make another layer from an existing layer
    2. Unable to edit data on the network drive
    3. Existing layers in the desktop
    4. Assistance with network drive
    In this case, strings 1 and 3 are similar, with the common words Existing and Layer, and strings 2 and 4 are similar, with the common words Network and Drive (eliminating stop words). The steps I'm following are:
    1. Iterate through the data set
    2. Do a row-by-row comparison
    3. Find the common words between the strings
    4. Form a cluster where the number of common words is greater than or equal to 2 (eliminating stop words)
    5. If the number of common words is less than 2, put the string in a new cluster
    6. Assign the rows either to existing clusters or form a new one, depending on the common words
    7. Continue until all the strings are processed
    I am implementing the project in C#, and have got to step 3. However, I'm not sure how to proceed with the clustering. I have researched a lot about string clustering but could not find any solution that fits my problem. Your inputs would be highly appreciated.
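    Since the question is already being implemented in C#, here is a rough sketch of one simple greedy interpretation of steps 4-6; the stop-word list and the comparison against only the first string of each cluster are placeholder assumptions, not a vetted clustering algorithm.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class StringClusterer
    {
        // Placeholder stop-word list; a real one would be much larger.
        static readonly HashSet<string> StopWords = new HashSet<string>(
            new[] { "to", "on", "the", "an", "a", "with", "from", "in" },
            StringComparer.OrdinalIgnoreCase);

        static HashSet<string> Words(string s) =>
            new HashSet<string>(
                s.Split(' ').Where(w => !StopWords.Contains(w)),
                StringComparer.OrdinalIgnoreCase);

        static void Main()
        {
            var strings = new[]
            {
                "How to make another layer from an existing layer",
                "Unable to edit data on the network drive",
                "Existing layers in the desktop",
                "Assistance with network drive"
            };

            var clusters = new List<List<string>>();
            foreach (var s in strings)
            {
                // Assign to the first cluster whose representative (first string)
                // shares >= 2 non-stop words; otherwise start a new cluster.
                // Note: matching "layer" with "layers" would additionally need
                // stemming, which this sketch omits.
                var words = Words(s);
                var match = clusters.FirstOrDefault(
                    c => Words(c[0]).Intersect(words, StringComparer.OrdinalIgnoreCase).Count() >= 2);
                if (match != null) match.Add(s);
                else clusters.Add(new List<string> { s });
            }

            foreach (var c in clusters)
                Console.WriteLine(string.Join(" | ", c));
        }
    }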

    Read the article

  • Game Messaging System Design

    - by you786
    I'm making a simple game, and have decided to try to implement a messaging system. The system basically looks like this: an entity generates a message, the message is posted to a global message queue, the messageManager notifies every object of the new message through onMessageReceived(Message msg), and, if the object wants, it acts on the message. The way I'm creating message objects is like this:
    // base message class, never actually instantiated
    abstract class Message {
        Entity sender;
    }
    class PlayerDiedMessage extends Message {
        int livesLeft;
    }
    Now my SoundManagerEntity can do something like this in its onMessageReceived() method:
    public void onMessageReceived(Message msg) {
        if (msg instanceof PlayerDiedMessage) {
            PlayerDiedMessage diedMessage = (PlayerDiedMessage) msg;
            if (diedMessage.livesLeft == 0)
                playSound(SOUND_DEATH);
        }
    }
    The pros of this approach:
    - Very simple and easy to implement.
    - The message can contain as much information as you want, because you can just create a new Message subclass with whatever info is necessary.
    The cons:
    - I can't figure out how to recycle Message objects into an object pool, unless I have a different pool for each subclass of Message. So I have lots and lots of object creation/memory allocation over time.
    - I can't send a message to a specific recipient, but I haven't needed that yet in my game, so I don't mind it too much.
    What am I missing here? There must be a better implementation or some idea that I'm missing.

    Read the article

  • Are you ready to take a walk in the clouds?

    - by Steve Loethen
    Cloud computing is here, whether we want it or not. When I say "a walk in the clouds" I am not talking about a pleasant romantic comedy, but about a real alternative to hosting applications on-premise. For years we have had the power to host our applications on remote systems; sure, challenges existed, and it was mostly just web sites. I could, with a few clicks, create an account at any of a myriad of web hosting companies, put my site in the hands of a remote host, and boom, I was a site on the internet. But choices, power, and management were limited. Now we have a set of services that give us the power and control we love, but with the scalability of the data center. My personal web site is hosted on a laptop running Hyper-V in my basement. I have to manage the machine, patch it, and make sure it is powered up. This is fine for the "hello, this is my dog Skippy" site that I maintain. If the football pool I run has an issue, one of the 10 users I have calls or emails me and I go check it out. All is well. But this falls well below the needs of even the simplest of enterprises. A business needs a stronger data center and a better pipe to the world. Do I really want to base my business on dynamic DNS and a DSL line from the local phone company? Cloud computing gives us most of what I value (control, a database of my own, updating my site from Visual Studio). Come learn how this technology can transform your business. If you are a Microsoft shop, or are interested in Microsoft in the cloud, a free 2-day Azure training class is being conducted in Kansas City on April 8 and 9: http://www.azurebootcamp.com/city/kansascity Hope to see you there. If you come, make sure you look me up.

    Read the article

  • Issue with running apt-get clean && apt-get autoclean

    - by nishanche
    root@T60ubuntuSVR:~# apt-get clean && apt-get autoclean
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    root@T60ubuntuSVR:~# deborphan | xargs aptitude --purge remove
    The program 'deborphan' is currently not installed. You can install it by typing: apt-get install deborphan
    The following packages will be REMOVED:
      amarok-help-en{pu} dnsmasq-base{pu} kvpnc-data{pu} libnetfilter-conntrack3{pu} libutouch-evemu1{pu} libutouch-frame1{pu} libutouch-geis1{pu} libutouch-grail1{pu} linux-headers-3.2.0-24{pu} linux-headers-3.2.0-24-generic-pae{pu} linux-headers-3.2.0-25{pu} linux-headers-3.2.0-25-generic-pae{pu} linux-headers-3.2.0-26{pu} linux-headers-3.2.0-26-generic-pae{pu} linux-headers-3.2.0-27{pu} linux-headers-3.2.0-27-generic-pae{pu} linux-headers-3.2.0-29{pu} linux-headers-3.2.0-29-generic-pae{pu} linux-headers-3.2.0-31{pu} linux-headers-3.2.0-31-generic-pae{pu} linux-headers-3.2.0-32{pu} linux-headers-3.2.0-32-generic-pae{pu} linux-headers-3.2.0-33{pu} linux-headers-3.2.0-33-generic-pae{pu} modemmanager{pu} usb-modeswitch{pu} usb-modeswitch-data{pu}
    The following partially installed packages will be configured:
      linux-generic-pae linux-image-generic-pae{b}
    0 packages upgraded, 0 newly installed, 27 to remove and 50 not upgraded.
    Need to get 0 B of archives. After unpacking 554 MB will be freed.
    The following packages have unmet dependencies:
      linux-image-generic-pae : Depends: linux-image-3.2.0-34-generic-pae but it is not going to be installed.
    The following actions will resolve these dependencies:
      Remove the following packages:
      1) linux-generic-pae
      2) linux-image-generic-pae
    Please tell me how to fix it.

    Read the article

  • World Location issues with camera and particle

    - by Joe Weeks
    I have a bit of a strange question. I am adapting the existing code base, including the tile engine, from the book XNA 4.0 Game Development by Example by Kurt Jaegers, particularly the part about the 2D platformer in the last couple of chapters. I am creating a platformer with a scrolling screen (similar to an old-school screen chase). I originally did not have any problems with this, as it is simply a case of updating the camera position on the X axis with game time. However, I have since added a particle system to allow the players to fire weapons. The particle shot is updated via its world position, and I have translated everything correctly in terms of world position when the collisions are checked. The crux of the problem is that the collisions only work once the screen is static; while the camera is moving to follow the player, the collisions are offset and hit blocks that are no longer there. My collision code for particles is as follows (there are two checks, horizontal and vertical):
    protected override Vector2 horizontalCollisionTest(Vector2 moveAmount)
    {
        if (moveAmount.X == 0)
            return moveAmount;

        Rectangle afterMoveRect = CollisionRectangle;
        afterMoveRect.Offset((int)moveAmount.X, 0);
        Vector2 corner1, corner2;

        // new particle world alignment code.
        afterMoveRect = Camera.ScreenToWorld(afterMoveRect);
        // end.

        if (moveAmount.X < 0)
        {
            corner1 = new Vector2(afterMoveRect.Left, afterMoveRect.Top + 1);
            corner2 = new Vector2(afterMoveRect.Left, afterMoveRect.Bottom - 1);
        }
        else
        {
            corner1 = new Vector2(afterMoveRect.Right, afterMoveRect.Top + 1);
            corner2 = new Vector2(afterMoveRect.Right, afterMoveRect.Bottom - 1);
        }

        Vector2 mapCell1 = TileMap.GetCellByPixel(corner1);
        Vector2 mapCell2 = TileMap.GetCellByPixel(corner2);

        if (!TileMap.CellIsPassable(mapCell1) || !TileMap.CellIsPassable(mapCell2))
        {
            moveAmount.X = 0;
            velocity.X = 0;
        }
        return moveAmount;
    }
    And the camera is pretty much the same as the one in the book, with this added (as an early test):
    public static void Update(GameTime gameTime)
    {
        position.X += 1;
    }

    Read the article

  • Is version history really sacred or is it better to rebase?

    - by dukeofgaming
    I've always agreed with Mercurial's mantra that history is sacred; however, now that Mercurial comes bundled with the rebase extension and rebasing is a popular practice in git, I'm wondering whether it can really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware that rebasing is dangerous after pushing. On the other hand, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch); however, personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would, IMHO, have real value for those studying the codebase and following the developers' train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase because it hides mistakes, and I don't think this is good for the project at all. So my question is: have you really found it valuable in practice to have such "organic commits" (i.e. untampered history), or, conversely, do you prefer to run into nifty, well-packed commits and disregard the programmers' experimentation process? Whichever one you choose, why does that work for you (having other team members keep history, or, alternatively, rebasing it)?

    Read the article

  • SEO and external sites that serve responsive images (like Re-SRC)

    - by Baumr
    Re-SRC is a tool that allows you to automatically serve responsive images for your website from their cloud servers. It delivers a new image file each time the browser window (viewport) is resized. To use it in your HTML when linking to an image, you would do the following: <img src="http://app.resrc.it//www.your-domain.com/img/img001.jpg"/> Some more background for the SEO considerations: as an example, looking at their demo page's code, the src of the Arc de Triomphe photo, when the browser window is resized to tablet width, shows this particular file at its widest. It is found under the following URL: http://app4-uk.resrc.it/s=w560,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If the viewport is increased to desktop width, then a smaller image is served, in line with the design; see this URL: http://app4-uk.resrc.it/s=w320,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg If I change the viewport to be about half-way between those two, then the image's URL is: http://app4-uk.resrc.it/s=w240,pd1/ro=h//www.resrc.it/img/demo/demo-image-1.jpg In other words, I found that there is a separate file for every 10-pixel increment of the image width. Very cool for saving bandwidth on mobile devices and serving responsive/retina images on others, but here are two problems I see for SEO:
    1. The img on your site, part of your semantic markup, will not be hosted on your site at all, or even on a server you control. Any links to these images will pass on "link juice" to Re-SRC's site instead.
    2. You are serving a vast array of different image files to different people; some may link to one, others to another size. Then there's the question of what different search engine crawlers will see.
    Also: there seems to be no fallback option if their servers are down. Do you see any other concerns? Or, perhaps, do you not see those as concerns?

    Read the article

  • Should all public methods in an abstract class be marked virtual?

    - by Justin Pihony
    I recently had to update an abstract base class in some OSS I was using, making its methods virtual so that it was more testable (I could not use an interface, as it combined two). This got me thinking about whether I should mark only the methods I needed as virtual, or whether I should mark every public method/property virtual. I generally agree with Roy Osherove that every method should be made virtual, but I came across this article that got me thinking about whether that is necessary or not. I am going to limit this to abstract classes for simplicity (whether all concrete public methods should be virtual is especially debatable, I am sure). I could see where you might want to allow a subclass to use a method but not want it overriding the implementation. However, as long as you trust that the Liskov Substitution Principle will be followed, why would you not allow it to be overridden? By marking a method abstract, you are forcing a certain override anyway, so it seems to me that all public methods inside an abstract class should indeed be marked virtual. However, I wanted to ask in case there is something I might not be considering. Should all public methods within an abstract class be made virtual?
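    For context, here is a small C# sketch of the distinction being discussed (the class names are illustrative only): a public method left non-virtual can be called by subclasses but not overridden, while a virtual one can be replaced, relying on Liskov substitution.

    using System;

    public abstract class ReportGenerator
    {
        // Abstract: subclasses are forced to supply an implementation.
        public abstract string BuildBody();

        // Virtual: subclasses may override, honoring Liskov substitution.
        public virtual string BuildHeader() => "Default header";

        // Non-virtual: subclasses can call it, but cannot replace the implementation
        // (only hide it with 'new', which does not affect calls through the base type).
        public string Generate() => BuildHeader() + Environment.NewLine + BuildBody();
    }

    public class SalesReport : ReportGenerator
    {
        public override string BuildBody() => "Sales figures...";
        public override string BuildHeader() => "Sales header";
    }

    class Program
    {
        static void Main()
        {
            ReportGenerator report = new SalesReport();
            Console.WriteLine(report.Generate()); // uses the overridden header and body
        }
    }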

    Read the article

  • Internet unusably slow with Realtek Semiconductor Co., Ltd. RTL8111/8168B card

    - by user42424
    I have recently installed Ubuntu 11.10 as a dual boot with Windows 7. After the install I had around 300 updates, so I installed them. At first I could use the internet, although it was extremely slow; now I cannot. Sometimes pages will load, and other times they will simply time out. When I try to download something, it will either take forever or not download at all. This is a wired system. On the Windows side my speeds are fine. Any help would be greatly appreciated. Also, as I said, I am new to Linux/Ubuntu, so please be nice. One last thing: I also installed 11.10 in the same dual-boot setup on my laptop, and the wireless speed there is the same as on Windows; only the wired desktop gives me the problem. Here is some hardware info, hope it helps:
    Mobo: Gigabyte GA-880GMA (AMD) / CPU: AMD Phenom(tm) II X4 965 / 16 GB RAM / Realtek PCIe GBE Family Controller / Cisco Linksys E2000 / Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
    eth0    Link encap:Ethernet  HWaddr 50:e5:49:33:64:cf
            inet addr:192.168.1.118  Bcast:192.168.1.255  Mask:255.255.255.0
            inet6 addr: fe80::52e5:49ff:fe33:64cf/64 Scope:Link
            UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
            RX packets:76722 errors:0 dropped:76722 overruns:0 frame:76722
            TX packets:49692 errors:0 dropped:65 overruns:0 carrier:0
            collisions:0 txqueuelen:1000
            RX bytes:107956638 (107.9 MB)  TX bytes:4342553 (4.3 MB)
            Interrupt:44 Base address:0x2000
    Thanks to roadmr, problem solved! I powered down the PC, unplugged the power cable at the PC end, waited a few (maybe 3) minutes, plugged the power back in, then pushed and held the power button for 30+ seconds. I let go, powered on the PC, and my internet is fine! Download and web speeds are fast, just like on my Windows 7 boot, maybe even faster. Problem solved, thanks to all!

    Read the article

  • Places to store basic data

    - by Ella
    I am using PHP. I'm building a fully modular CMS, which is destined for the public. Some people might view this as a framework, but I intend to write a set of extensions for it, extensions that will make it a CMS :P Because it's completely modular, I have a problem figuring out how to load extensions. In practice I need to get the list of active extensions so I can load them inside my base class. I load them by reading some file headers, which contain a "dependency" field; that field decides the order in which I have to instantiate the objects. The problem is that when the CMS starts I have no database interface, because that's an extension too, so I can't store the active extensions list in the database :) You might ask how extensions are activated in the first place. Well, in the administration interface, which is an extension as well (obviously, on the first install of the CMS some extensions will be active by default). Could writing that list to a text file be a solution? The problem is that a lot of hosts are not very friendly to scripts that write files. And since this CMS is public, I might have a problem here?

    Read the article
