Search Results

Search found 5069 results on 203 pages for 'hidden premise'.


  • Are long methods always bad?

    - by wobbily_col
    So, looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, one long method being 350 lines of code. My code is written so that it first deals with the parameters - sorting / filtering the queryset - then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation, with rules complex enough that it can't easily be done in the database, so I have some variables declared outside the main loop that get altered during the loop, roughly:

        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables
        return queryset, context

    So according to the theory I should factor all the code out into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find this makes the code less readable, when you need to constantly jump from one method to the next to figure out all the parts of it while keeping the outermost method in your head. I find that with a long method that is well formatted you can see the logic more easily, as it isn't hidden away in inner methods. I could factor the code out into smaller methods, but often there is an inner loop being used for two or three things, so doing that would result in more complex code, or in methods that don't do one thing but two or three (alternatively I could repeat the inner loop for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing separate methods when they will only be used in one place?
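
    A middle ground the question hints at is to keep the single pass over the queryset but give each aggregation rule a small, named helper. The sketch below is only an illustration of that shape in plain Python; the condition and field names are invented, since the original code is only outlined above.

        def summarise(queryset):
            """One pass over the queryset; each rule is a named function rather than a separate loop."""
            totals = {"variable_1": 0, "variable_2": 0}
            for obj in queryset:
                apply_condition_a(obj, totals)
                apply_condition_b(obj, totals)   # one helper per rule, all sharing the running totals
            return totals

        def apply_condition_a(obj, totals):
            # the rule still sees the state accumulated so far, as in the original loop
            if obj.condition_a and totals["variable_2"] > 0:
                totals["variable_1"] += 1

        def apply_condition_b(obj, totals):
            if obj.condition_b:
                totals["variable_2"] += obj.amount   # 'condition_b' and 'amount' are placeholder names

    Whether this is more readable than one well-formatted loop body is exactly the judgment call the question is asking about; it avoids repeating the loop, but the reader now has to follow the helpers.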

    Read the article

  • Cloud Computing Forces Better Design Practices

    - by Herve Roggero
    Is cloud computing simply different from on-premise development, or is cloud computing actually forcing you to create better applications than you normally would? In other words, is cloud computing merely imposing different design principles, or forcing better design principles?  A little while back I got into a discussion with a developer in which I was arguing that cloud computing, and specifically Windows Azure in his case, was forcing developers to adopt better design principles. His opinion was that cloud computing was not yielding better systems; just different systems. In this blog, I will argue that cloud computing does force developers to use better design practices, and hence build better applications. So the first thing to define, of course, is the word “better”, in the context of application development. Looking at a few definitions online, better means “superior quality”. As it relates to this discussion then, I stipulate that cloud computing can yield higher quality applications in terms of scalability, everything else being equal. Before going further I need to also outline the difference between performance and scalability. Performance and scalability are two related concepts, but they don’t mean the same thing. Scalability is the measure of system performance given various loads. So when developers design for performance, they usually give higher priority to a given load and tend to optimize for that load. When developers design for scalability, the actual performance at a given load is not as important; the ability to ensure reasonable performance regardless of the load becomes the objective. This can lead to very different design choices. For example, if your objective is to obtain the fastest response time possible for a service you are building, you may choose to implement a TCP connection that never closes until the client chooses to close the connection (in other words, a tightly coupled service from a connectivity standpoint), and on which a connection session is established for faster processing on the next request (like SQL Server or other database systems for example). If your objective is to scale, you may implement a service that answers requests without keeping session state, so that server resources are released as quickly as possible, like a REST service for example. This alternate design would likely have a slower response time than the TCP service for any given load, but would continue to function at very large loads because of its inherently loosely coupled design. An example of a REST service is the NO-SQL implementation in the Microsoft cloud called Azure Tables. Now, back to cloud computing… Cloud computing is designed to help you scale your applications, specifically when you use Platform as a Service (PaaS) offerings. However it’s not automatic. You can design a tightly-coupled TCP service as discussed above, and as you can imagine, it probably won’t scale even if you place the service in the cloud because it isn’t using a connection pattern that will allow it to scale [note: I am not implying that all TCP systems do not scale; I am just illustrating the scalability concepts with an imaginary TCP service that isn’t designed to scale for the purpose of this discussion].
The other service, using REST, will have a better chance to scale because, by design, it minimizes resource consumption for individual requests and doesn’t tie a client connection to a specific endpoint (which means you can easily deploy this service to hundreds of machines without much trouble, as long as your pockets are deep enough). The TCP and REST services discussed above are both valid designs; the TCP service is faster and the REST service scales better. So is it fair to say that one service is fundamentally better than the other? No; not unless you need to scale. And if you don’t need to scale, then you don’t need the cloud in the first place. However, it is interesting to note that if you do need to scale, then a loosely coupled system becomes a better design because it can almost always scale better than a tightly-coupled system. And because most applications grow over time, with an increasing user base, new functional requirements, increased data and so forth, most applications eventually do need to scale. So in my humble opinion, I conclude that a loosely coupled system is not just different from a tightly coupled system; it is a better design, because it will stand the test of time. And in my book, if a system stands the test of time better than another, it is of superior quality. Because cloud computing demands loosely coupled systems so that its underlying service architecture can be leveraged, developers ultimately have no choice but to design loosely coupled systems for the cloud. And because loosely coupled systems are better… … the cloud forces better design practices. My 2 cents.
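
    To make the contrast concrete, here is a deliberately tiny sketch (not from the article, standard-library Python only) of the two connection patterns being compared: one handler that keeps per-connection session state, and one that treats every request as self-contained.

        import socketserver
        from http.server import BaseHTTPRequestHandler

        # Tightly coupled: state lives as long as the socket does, so the client is pinned
        # to this one process for the whole conversation.
        class SessionTCPHandler(socketserver.StreamRequestHandler):
            def handle(self):
                session = {}                          # per-connection state
                for line in self.rfile:               # the connection stays open between requests
                    session["last_request"] = line.strip()
                    self.wfile.write(b"ok\n")

        # Loosely coupled: nothing is kept between calls, so any instance behind a load
        # balancer can answer the next request, which is what lets the design scale out.
        class StatelessRESTHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                body = b'{"status": "ok"}'
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)                # respond and release the worker immediately

        # Wiring them up would look like (not run here):
        #   socketserver.TCPServer(("", 9000), SessionTCPHandler).serve_forever()
        #   http.server.HTTPServer(("", 8000), StatelessRESTHandler).serve_forever()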

    Read the article

  • Architecture : am I doing things right?

    - by Jeremy D
    I'm trying to use a '~classic' layered architecture using .NET and Entity Framework. We are starting from a legacy database which is a little bit crappy:

        - inconsistent naming
        - unneeded views (views referencing other views, select * views, etc...)
        - aggregated columns
        - potatoes and carrots in the same table
        - etc...

    So I ended up fully isolating my database structure from my domain model. To do so, EF entities are hidden from the presentation layer. The goal is to permit easier database refactoring while lowering its impact on applications. I'm now facing a lot of challenges and I'm starting to ask myself if I'm doing things right. My domain model is highly volatile; it keeps evolving with the apps as new field needs arise. Its complexity keeps rising and the classes it contains are starting to get a lot of properties. Creating an include strategy and reprojecting to EF is very tricky (my domain objects don't have any kind of lazy/eager-loading relationship properties):

        DomainInclude<Domain.Model.Bar>.Include("Customers").Include("Customers.Friends")
        // To...
        IFooContext.Bars.Include(...).Include(...).Where(...)

    Some frameworks break through the isolation (DevExpress grids need either XPO or an IQueryable for filtering and paging large data sets). I'm starting to ask myself whether:

        - the isolation of EF auto-generated entities is an unneeded cost. Should I allow frameworks to hit IQueryable? A slippery slope to hell? (It's really hard to isolate the DevExpress framework - any successful experience?)
        - the high volatility of my domain model is normal?

    Did you have similar difficulties? Any advice based on experience?

    Read the article

  • Clouds, Clouds, Clouds Everywhere, Not a Drop of Rain!

    - by sxkumar
    At the recently concluded Oracle OpenWorld 2012, the center of discussion was clearly Cloud. Over the five action-packed days, I got to meet a large number of customers and most of them had serious interest in all things cloud. Public Cloud - particularly the Oracle Cloud - clearly got a lot of attention and interest. I think the use cases and the value proposition for public cloud are pretty straightforward. However, when it comes to private cloud, there were some interesting revelations. Well, I shouldn’t really call them revelations, since they are pretty consistent with what I have heard from customers at other conferences as well as during 1:1 interactions. While interest in the enterprise private cloud remains very high, only a handful of enterprises have truly embarked on a journey to create what the purists would call a true private cloud - with capabilities such as self-service and chargeback/showback. For a large majority, today's reality is simply consolidation and virtualization - and they are quite far off from creating an agile, self-service and transparent IT infrastructure, which is what the enterprise cloud is all about. Even the handful of those who have actually implemented a close-to-real enterprise private cloud have taken an infrastructure-centric approach and are seeing only limited business upside. Quite a few were frank enough to admit that chargeback and self-service isn’t something they see an immediate need for. This is in stark contrast to the picture being painted by all those surveys out there that show a large number of enterprises having already implemented an enterprise private cloud. On the face of it, this seems quite contrary to the observations outlined above. So what exactly is the reality? Well, the reality is that there is undoubtedly a huge amount of interest among enterprises in transforming their legacy IT environment - which is often seen as too rigid, too fragmented, and ultimately too expensive - into something more agile, transparent and business-focused. At the same time, however, there is a great deal of confusion among CIOs and architects about how to get there. This isn't very surprising given all the buzz and hype surrounding cloud computing. Every IT vendor claims to have the most unique solution and there isn't a single IT product out there that does not have a cloud angle to it. Add to this the chatter on the blogosphere, and it will get even a sane mind spinning. Consequently, most enterprises are still struggling to fully understand the concept and value of the enterprise private cloud. Even among those who have chosen to move forward relatively early, quite a few have made their decisions based more on vendor influence/preferences than on what their businesses actually need. Clearly, there is a disconnect between the promise of the enterprise private cloud and the current adoption trends. So what is the way forward? I certainly do not claim to have all the answers. But here is a perspective that many cloud practitioners have found useful and thus worth sharing. To take a step back, the fundamental premise of the enterprise private cloud is IT transformation. It is the quest to create a more agile, transparent and efficient IT infrastructure that is driven more by business needs rather than constrained by operational and procedural inefficiencies.
It is the new way of delivering and consuming IT services - where IT organizations operate more like enablers of strategic services rather than just being the gatekeepers of IT resources. In an enterprise private cloud environment, IT organizations are expected to empower end users via self-service access/control and to provide business stakeholders a transparent view of how resources are being used, what the cost of delivering a given service is, how well customers are being served, etc. But the most important thing to note here is that the enterprise private cloud is not just an IT project; rather, it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment. Surprised? You shouldn’t be. Just remember how business users have been at the forefront of public cloud adoption within enterprises, and private cloud is no exception. Such a broad-based transformation makes cloud more than a technology initiative. It requires people (organizational) and process changes as well, and these changes are as critical as the choice of the right tools and technology. In my next blog, I will share how essential it is for enterprise cloud technology to go hand-in-hand with process re-engineering and organizational changes to unlock the true value of the enterprise cloud. I am sharing a short video from my session "Managing your private Cloud" at Oracle OpenWorld 2012. More videos from this session will be posted at the recently introduced Zero to Cloud resource page. Many other experts on Oracle's enterprise private cloud solution will join me on the "Zero to Cloud" blog and share best practices, deployment tips and information on how to plan, build, deploy, monitor, manage, meter and optimize the enterprise private cloud. We look forward to your feedback, suggestions and having an engaging conversation with you on this blog.

    Read the article

  • How to shutdown Windows 8 PC without using mouse?

    - by Gopinath
    Windows 8 sports a re-imagined desktop and tablet user interface with touch-friendly Metro looks. One of the major changes in Windows 8 for common users is the lack of the Start menu, which we had been friends with for more than a decade and will be missing on Windows 8. As there is no Start menu, the way you shut down a Windows 8 computer is a bit different. To shut down using the mouse, you need to hover over the top-right edge of the screen to open the hidden menu, go to the "Settings" tab -> "Power" -> then choose "Shut down", "Sleep" or "Restart". That’s a lot of mouse movement, and if you are a power user you may not like doing it. How about shutting down the PC using the keyboard? Here are two ways to shut down the PC with the keyboard.

    Keyboard shortcuts: With keyboard shortcuts you can navigate to the power options of Windows 8. Press Win + C to bring up the Charms bar, then use the arrow and Enter keys to reach the Settings charm and the shutdown menu. This is one of the easiest ways to shut down the PC without using the mouse.

    Run command: If you don’t like going through the Settings menu, you can use the traditional Run commands. Press Win + R to open the Run dialog and enter the command shutdown -s -t 0 to shut down the PC immediately.

    Read the article

  • Can the "Documents" standard folder be rescued and how?

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job. So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess. My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en-masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    Read the article

  • OOP implementation of BUFFS and Stats. Suggestion

    - by Mattia Manzo Manzati
    I am developing an MMORPG server using NodeJS. I am not sure how to implement buffs. I mean, equipped objects or used skills have effects on the Player(), which has many Stats(), some of them with a max cap. Effects can change a stat's value, increasing or decreasing it by a flat value, by a percentage, or by completely rewriting the value of the stat. After a while I decided to create a base class for buffs, which can be hidden (if they are cast from an equipped object) or shown if they come from an ability (spell). Anyway, I need suggestions on how to implement it: use an array of all active buffs for a stat and have a function calculate the value of the stat affected by buffs each time I need it, or...? Are there other, more OOP ways to do it? I have read this: What's a way to implement a flexible buff/debuff system? But that implements only a percentage system, where buffs can only say "+10%, +20%, etc...", while I would love to have a hybrid system which can have percentage values or static values (like WoW does). Using modifiers, it's hard to implement, because modifiers refer to the current value of the stat :/ Thanks for suggestions :)
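
    One way to get the hybrid behaviour asked about (flat and percentage buffs together, without modifiers compounding on an already-modified value) is to keep a base value per stat and recompute the effective value from the buff list on demand. A small illustrative sketch follows; it is written in Python for brevity even though the question is about NodeJS, and all class and field names are invented.

        class Buff:
            def __init__(self, stat, flat=0, percent=0, override=None):
                self.stat, self.flat, self.percent, self.override = stat, flat, percent, override

        class Stat:
            def __init__(self, base, max_cap=None):
                self.base, self.max_cap = base, max_cap

            def value(self, buffs):
                # an override rewrites the stat outright; otherwise add flat bonuses,
                # then apply the summed percentages to that subtotal
                overrides = [b.override for b in buffs if b.override is not None]
                if overrides:
                    result = overrides[-1]
                else:
                    flat = sum(b.flat for b in buffs)
                    percent = sum(b.percent for b in buffs)
                    result = (self.base + flat) * (1 + percent / 100.0)
                return min(result, self.max_cap) if self.max_cap is not None else result

        # usage: base 10, +5 flat from a sword, +20% from a spell -> (10 + 5) * 1.2 = 18
        strength = Stat(base=10, max_cap=100)
        active_buffs = [Buff("strength", flat=5), Buff("strength", percent=20)]
        print(strength.value(active_buffs))

    Because the effective value is always derived from the base, removing a buff is just removing it from the list and recomputing, which sidesteps the "modifiers refer to the current value" problem.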

    Read the article

  • WCF REST Error Handler

    - by Elton Stoneman
    I’ve put up on GitHub a sample WCF error handler for REST services, which returns proper HTTP status codes in response to service errors. The code is very simple – a ServiceBehavior implementation which can be specified in config to tag the RestErrorHandler to a service. Any uncaught exceptions will be routed to the error handler, which sets the HTTP status code and description in the response based on the type of exception. The sample defines a ClientException which can be thrown in code to indicate a problem with the client’s request, and the response will be a status 400 with a friendly error message:

        throw new ClientException("Invalid userId. Must be provided as a positive integer");

    - responds:

        Request URL: http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/lastLogin?userId=xyz
        Error Status Code: 400, Description: Invalid userId. Must be provided as a positive integer

    Any other uncaught exceptions are hidden from the client. The full details are logged with a GUID to identify the error, and the response to the client is a status 500 with a generic message giving them the GUID to follow up on:

        var iUserId = 0;
        var dbz = 1 / iUserId;

    - logs the divide-by-zero error and responds:

        Request URL: http://localhost/Sixeyed.WcfRestErrorHandler.Sample/ErrorProneService.svc/dbz
        Error Status Code: 500, Description: Something has gone wrong. Please contact our support team with helpdesk ID: C9C5A968-4AEA-48C7-B90A-DEC986F80DA5

    The sample demonstrates two techniques for building the response. For client exceptions, a friendly HTML response is sent in the body as well as the status code and description. Personally I prefer not to do that – it doesn’t make sense to get a 400 error and find text/html when you’re expecting application/json, but it’s easy to do if that’s the functionality you want. The other option is to send an empty response, which the sample does with server exceptions. The obvious extension is to have multiple exceptions representing all the status codes you want to provide, then your code is as simple as throwing the relevant exception – UnauthorizedException, ForbiddenException, NotImplementedException etc – anywhere in the stack, and it will be handled nicely.

    Read the article

  • Globacom and mCentric Deploy BDA and NoSQL Database to analyze network traffic 40x faster

    - by Jean-Pierre Dijcks
    In a fast evolving market, speed is of the essence. mCentric and Globacom leveraged Big Data Appliance, Oracle NoSQL Database to save over 35,000 Call-Processing minutes daily and analyze network traffic 40x faster.  Here are some highlights from the profile: Why Oracle “Oracle Big Data Appliance works well for very large amounts of structured and unstructured data. It is the most agile events-storage system for our collect-it-now and analyze-it-later set of business requirements. Moreover, choosing a prebuilt solution drastically reduced implementation time. We got the big data benefits without needing to assemble and tune a custom-built system, and without the hidden costs required to maintain a large number of servers in our data center. A single support license covers both the hardware and the integrated software, and we have one central point of contact for support,” said Sanjib Roy, CTO, Globacom. Implementation Process It took only five days for Oracle partner mCentric to deploy Oracle Big Data Appliance, perform the software install and configuration, certification, and resiliency testing. The entire process—from site planning to phase-I, go-live—was executed in just over ten weeks, well ahead of the four months allocated to complete the project. Oracle partner mCentric leveraged Oracle Advanced Customer Support Services’ implementation methodology to ensure configurations are tailored for peak performance, all patches are applied, and software and communications are consistently tested using proven methodologies and best practices. Read the entire profile here.

    Read the article

  • Install Ubuntu on Asus Eee-PC 1005PE - Dealing with special partitions

    - by MestreLion
    I have an Asus EeePC 1005PE netbook and I'm planning on doing a massive re-partitioning (going to install Ubuntu, Mint, XP, etc.). I've noticed it has two "special" partitions: a 10 GB FAT32 hidden RESTORE partition (used by the BIOS "F9 recovery" feature) and a 16 MB "unknown" partition at the end of the drive (used by the BIOS "Boot Booster" feature). So, for both partitions, my question is: can I move/resize the recovery partition freely? What are the requirements for it (I mean, for it to still be found by the BIOS when I press F9 / activate Boot Booster)? Partition table order? Partition type? Flags? Label? UUID? Can I make it a logical (instead of primary) partition? Must it be flagged as boot? And, more importantly: where can I find any official documentation about it? I've read a lot of (mis)information about it... some say the Boot Booster partition must be last (in the partition table), some say Recovery must be 2nd, that it must be bootable, etc. How can I know what is really needed for the BIOS to use both F9 and Boot Booster? Note: I'm using GParted from a live USB stick (Mint 10 / Ubuntu 10.10), and I've noticed that, since the filesystem type of the Boot Booster partition is not recognized, it can't move or resize it. Can I delete it and re-create it somewhere else? Whenever I create a 0xEF partition, GParted crashes and quits and I cannot open it again (I must delete the partition using fdisk / cfdisk).

    Read the article

  • Firefox and Chrome display "top: -5px" differently

    - by Kevin
    Using Google Web Toolkit, I have a DIV parent with a DIV and anchor children:

        <div class="unseen activity">
            <div class="unseen-label"/>
            <a href .../>
        </div>

    With the following CSS, Chrome shows the "unseen-label" slightly below the anchor, which is positioned correctly in both Chrome and Firefox. Firefox, however, shows the label in line with the anchor.

        .unseen-activity div.unseen-label {
            display: inline-block;
            position: relative;
            top: -5px;
        }

    and

        .unseen-activity a {
            background: url('style/images/refreshActivity.png') no-repeat;
            background-position: 0 2px;
            height: 20px;
            overflow: hidden;
            margin-left: 10px;
            display: inline-block;
            margin-top: 2px;
            padding-left: 20px;
            padding-right: 10px;
            position: relative;
            top: 2px;
        }

    Please tell me how to change my CSS so that Chrome renders the label centered on the anchor. However, I need to keep Firefox happy and rendering correctly.

    Read the article

  • Messaging with KnockoutJs

    - by Aligned
    MVVM Light has Messaging that helps keep View Models decoupled and isolated, and keeps the separation of concerns, while allowing them to communicate with each other. This is a very helpful feature. One View Model can send off a message and, if anyone is listening for it, they will react; otherwise nothing will happen. I now want to do the same with KnockoutJs View Models. Here are some links on how to do this:

        http://stackoverflow.com/questions/9892124/whats-the-best-way-of-linking-synchronising-view-models-in-knockout
        http://www.knockmeout.net/2012/05/using-ko-native-pubsub.html ~ this is a great article describing the ko.subscribable type.
        http://jsfiddle.net/rniemeyer/z7KgM/ ~ shows how to do the subscription
        https://github.com/rniemeyer/knockout-postbox will be used to help with the PubSub (described in the blog post above) through the NuGet package.
        http://jsfiddle.net/rniemeyer/mg3hj/ of knockout-postbox

    Implementation: Use syncWith for two-way synchronization.

    MainVM:
        self.selectedElement = ko.observable().syncWith("selectedElement");

    ElementListComponentVM example:
        self.selectedElement = ko.observable().syncWith("selectedElement");
        self.selectedElement.subscribe(function () {
            // do something with the selection change
        });

    ElementVMTwo:
        self.selectedElement = ko.observable().syncWith("selectedElement");
        // subscribe example
        ko.postbox.subscribe("changeMessage", function (newValue) { });
        // or use subscribeTo
        this.visible = ko.observable().subscribeTo("section", function (newValue) {
            // do something here
        });

    · Use ko.toJS to avoid both sides holding the same reference (see the blog post).
    · unsubscribeFrom should be called when the dialog is hidden or closed.
    · Use publishOn to automatically send out messages when an observable changes:
        ko.observable().publishOn("section");

    Read the article

  • Games without a(n explicit) game loop

    - by Davy8
    Most game development happens with a main game loop. Are there any good articles/blog posts/discussions about games without a game loop? I imagine they'd mostly be web games, but I'd be interested in hearing otherwise. (As a side note, I think it's really interesting that the concept is almost exclusively used in gaming as far as I'm aware; perhaps that may be another question.) Edit: I realize there's probably a redraw loop somewhere. I guess what I really mean is a loop that is hidden from you. Frames are something you as the developer are not concerned with, as you're working at a higher level of abstraction. E.g. someLootItem.moveTo(inventory, someAnimationType) would move the item from the loot box to your inventory using the specified animation type, without the game developer having to worry about the implementation details of that animation. Maybe that's how "real" games end up working, but from reading most tutorials they seem to imply a much more granular level of control is used, though that might just be an artifact of being a tutorial.
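
    As a rough illustration of the "loop hidden behind a higher abstraction" idea: the game code only registers what should happen, and the framework owns the update loop. This is a made-up, minimal Python sketch (no interpolation, the item simply snaps to its target when the tween finishes), not how any particular engine does it.

        import time

        class Tween:
            def __init__(self, obj, target, duration, on_done=None):
                self.obj, self.target, self.duration, self.on_done = obj, target, duration, on_done
                self.elapsed = 0.0

        class Engine:
            """The engine owns the loop; code using move_to() never writes one."""
            def __init__(self):
                self.tweens = []

            def move_to(self, obj, target, duration, on_done=None):
                self.tweens.append(Tween(obj, target, duration, on_done))

            def run(self):                            # the hidden loop
                last = time.monotonic()
                while self.tweens:
                    now = time.monotonic()
                    dt, last = now - last, now
                    for t in list(self.tweens):
                        t.elapsed += dt
                        if t.elapsed >= t.duration:
                            t.obj["slot"] = t.target  # snap to target when the tween ends
                            self.tweens.remove(t)
                            if t.on_done:
                                t.on_done(t.obj)
                    time.sleep(1 / 60)

        engine = Engine()
        loot = {"slot": "loot_box"}
        engine.move_to(loot, "inventory", duration=0.5, on_done=lambda o: print("moved:", o))
        engine.run()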

    Read the article

  • Tech Talk: Managing Cloud Integration

    - by Tanu Sood
    Cloud computing solutions are widely hailed as a way to reduce capital expenditures, yet organizations are realizing they need to also consider all of the nuances of integrating cloud applications with existing information systems. Cloud integration, after all, has a direct impact on your costs, maintenance and upgrade efforts. Catch this conversation on Tech Talk with Oracle Vice President Amit Zavery to understand how Oracle Fusion Middleware provides a simple and consistent method of maintaining integration interfaces across disparate systems, across cloud and on-premise applications. Simplify your IT infrastructure and seamlessly manage data and application integration across your applications with Oracle solutions. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com. Photo courtesy: www.cloudtweaks.com

    Read the article

  • Win7 no longer available after installing 12.04

    - by Michael
    I have installed Ubuntu 12.04 but my Windows 7 partition seems to have been lost. It is in sda2. Can anyone tell me how to get this Windows 7 partition back without having to reinstall Windows 7?

        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd45cd45c

           Device Boot      Start         End      Blocks  Id  System
        /dev/sda1             2048    61433855    30715904  83  Linux
        /dev/sda2   *     61433856   122873855    30720000   7  HPFS/NTFS/exFAT
        /dev/sda3        122873856   976769023   426947584   7  HPFS/NTFS/exFAT

        Disk /dev/sdb: 203.9 GB, 203928109056 bytes
        255 heads, 63 sectors/track, 24792 cylinders, total 398297088 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x03ee03ee

           Device Boot      Start         End      Blocks  Id  System
        /dev/sdb1   *           63    20482874    10241406   c  W95 FAT32 (LBA)
        /dev/sdb2         20482875    40965749   10241437+  1c  Hidden W95 FAT32 (LBA)
        /dev/sdb3         40965750   398283479   178658865   f  W95 Ext'd (LBA)
        /dev/sdb5         40965813    76694309   17864248+   7  HPFS/NTFS/exFAT
        /dev/sdb6         76694373   108856439   16081033+   7  HPFS/NTFS/exFAT
        /dev/sdb7        108856503   398283479  144713488+   7  HPFS/NTFS/exFAT

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        240 heads, 63 sectors/track, 129201 cylinders, total 1953525168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000001

           Device Boot      Start         End      Blocks  Id  System
        /dev/sdc1   *           63    20480543   10240240+  82  Linux swap / Solaris
        /dev/sdc2         20480605  1953519119  966519257+   f  W95 Ext'd (LBA)
        /dev/sdc5         20480607  1953519119  966519256+   7  HPFS/NTFS/exFAT

    Read the article

  • I want to be able to use the unity menu with Citrix full screen

    - by porec
    I use Citrix Receiver at work, with both XenApp and XenDesktop, often at the same time. Since the Unity menu still appears at the top anyway, I'd like to be able to use it. Right now I can see it, but it doesn't work. I have to either tab out (double-tapping Alt first) and open another program first, or move the mouse to the left and open another program from the Unity launcher on the left, BEFORE I can use the menu at the top. (My launcher on the left side is in auto-hide mode, so I actually like it :)) For example, I use Spotify for listening to music; it appears in the top menu, but it doesn't react when I click it. I have to move the mouse to the left, open another program, then move to the top and ask it to show Spotify. If I open Spotify from the left menu, it hangs (since it's hidden, and I have to ask for it to be shown, not reopen the whole program). Or if I want to lock the screen, I have to open another program (e.g. NixNote) before I can lock it. Since the Unity menu is at the top anyway, I don't see why it shouldn't be able to control such things.

    Read the article

  • What is the difference between the "Entire Partition" and "Entire Disc"?

    - by Roman
    I want to install Ubuntu alongside my Windows 7 operating system. During installation I have three options:

        1. Install alongside the existing OS.
        2. Remove everything and install Ubuntu.
        3. Manual partitioning (advanced).

    The above list is not precise (I do not remember exactly what was written there; I am just writing the options as I understood them). I know that option 2 is not for me, so I need to choose either 1 or 3, and I do not know which one. I want the possibility to manually specify the space assigned to Windows and Ubuntu (for example 40% for Windows and 60% for Ubuntu). I chose the first option and saw a window with the following information:

        Allocate drive space by dragging the drive below.
        File (48.1 GB)            Ubuntu
        /dev/sda2 (ntfs)          /dev/sda3 (ext4)
        286.6 GB                  241.7 GB
        2 small partitions are hidden, use the advanced partitioning tool for more control.
        [use entire partition] [use entire disk] [Quit] [Back] [Install Now]

    My problem is that I do not understand what I see. In particular, I can press [use entire partition] or [use entire disk] and I do not know what the difference is. Moreover, as far as I understand, I can even press [Install Now] without pressing either of the two buttons. So I have three options. What is the difference between them? The most important thing for me is not to delete the old operating system with all the data stored there.

    Read the article

  • Is it possible to hide Launcher for certain apps?

    - by Przemek
    As 14.04 LTS has been released, I thought I'd try to make a larger switch to Ubuntu - especially considering that most of the apps I use at work are at last available in Linux versions (with the major exception being Rhino3d v5 - I hope it will be possible to launch it with Wine somehow). But as I use my PC for 3D design, I need every damn inch of screen space, and this is where the Launcher becomes a pain. While in general I like it (as well as the rest of Unity) when I do office work (emails, docs, etc.), it has turned out to be a major pain with 3D apps and a tablet. I'd like to set the Launcher to hide when certain apps are maximized. Is that possible? If not, is it possible to set it to intellihide / stay in the background globally, so it won't be visible when any app is maximized? Auto-hide is (sadly) not a good solution - the way Ubuntu handles revealing the hidden bar is tricky to work with when you use a graphics tablet (but to be honest I have gripes with it even when using a mouse). I need the bar to disappear or stay in the background so it won't take up screen space - 3D apps already have way too many menus that eat valuable screen space.

    Read the article

  • Create a system image in Windows 8

    - by Greg Low
    One of the things that I've just come to accept is that the designers of Windows 8 and I think very differently. It'll take a long time to convince me that shutting down the computer is a "setting". Even after using Windows 8 for quite a while now, I still find that I struggle nearly every day, just trying to do things that I previously knew how to do. That's just not a good thing. Today I decided to create a system image as I hadn't made one lately. I started in Control Panel looking for backup options. That yielded nothing except programs that wanted to "Save backup copies of my files with file history". I thought "oh well, let's just try the new search options". I hit the Windows key and typed "Backup". No, nothing came up there either. I searched again all over the Control Panel options to no avail. So it was time to hit Google again. Once again, clearly lots of people used to know how to do this and have been trying to work out where this option went. The first trick is that there are a bunch of Control Panel options that don't appear in the Control Panel. In the address bar at the top, if you click on Control Panel, you'll find there is an option that says "All Control Panel Options". That is curious, given that's where I thought I was when I opened Control Panel. No hint is given on that screen that there are a bunch of hidden options. Nonetheless, I then checked out "all" the options. The option that you need to create a system image in Windows 8 turns out to be the "Windows 7 File Recovery" option that appears in this extended list. Why does it say "Windows 7" when it's for "Windows 8" as well and I'm running "Windows 8"? Why do I have to choose an option that says "File Recovery" to create a system image backup? <sigh> But at least I've recorded it here for the next time I forget where to find it.

    Read the article

  • Using a random string to authenticate HMAC?

    - by mrwooster
    I am designing a simple web service and want to use HMAC for authentication to the service. For the purpose of this question we have:

        - a web service at example.com
        - a secret key shared between a user and the server [K]
        - a consumer ID which is known to the user and the server (but is not necessarily secret) [D]
        - a message which we wish to send to the server [M]

    The standard HMAC implementation would involve using the secret key [K] and the message [M] to create the hash [H], but I am running into issues with this. The message [M] can be quite long and tends to be read from a file. I have found it's very difficult to produce a correct hash consistently across multiple operating systems and programming languages because of hidden characters which make it into various file formats. This is of course bad implementation on the client side (100%), but I would like this web service to be easily accessible and not have trouble with different file formats. I was thinking of an alternative which would allow the use of a short (5-10 character) random string [R] rather than the message for authentication, e.g. H = HMAC(K, R). The user then passes the random string to the server and the server checks the HMAC server-side (using the random string + shared secret). As far as I can see, this produces the following issues:

        - There is no message integrity - this is OK; message integrity is not important for this service.
        - A user could re-use the hash with a different message - I can see two ways around this: combine the random string with a timestamp so the hash is only valid for a set period of time, or only allow each random string to be used once.
        - Since the client is in control of the random string, it is easier to look for collisions.

    I should point out that the principal reason for authentication is to implement rate limiting on the API service. There is zero need for message integrity, and it's not a big deal if someone can forge a single request (but it is if they can forge a very large number very quickly). I know that the correct answer is to make sure the message [M] is the same on all platforms/languages before hashing it. But, taking that out of the equation, is the above proposal an acceptable second best?
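
    For reference, here is a minimal server-side sketch of the proposal above (random string plus timestamp, each random string accepted only once), written in Python purely as an illustration. The key store, the 5-minute window and the in-memory nonce set are assumptions for the sketch, not part of the question.

        import hashlib, hmac, time

        SHARED_KEYS = {"consumer-1": b"secret-key"}   # consumer ID [D] -> shared secret [K] (illustrative values)
        used_nonces = set()                           # a real service would need expiry/persistence here

        def verify(consumer_id, nonce, timestamp, client_mac, window_seconds=300):
            """Return True only if the HMAC over nonce:timestamp matches and the nonce is fresh."""
            key = SHARED_KEYS.get(consumer_id)
            if key is None:
                return False
            if abs(time.time() - float(timestamp)) > window_seconds:
                return False                          # outside the validity window
            if nonce in used_nonces:
                return False                          # random string already used once
            expected = hmac.new(key, f"{nonce}:{timestamp}".encode(), hashlib.sha256).hexdigest()
            if not hmac.compare_digest(expected, client_mac):
                return False
            used_nonces.add(nonce)
            return True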

    Read the article

  • Is software support an option for your career?

    - by Maria Sandu
    If you have a technical background, why should you choose a career in support? We have invited Serban to answer these questions and to give us an overview of one of the biggest technical teams in Oracle Romania. He's been with Oracle for 7 years, leading the local PeopleSoft Financials & Supply Chain Support team. Back in 2013 Serban started building a new support team in Romania – Fusion HCM. His current focus is building a strong support team for Fusion HCM, the latest solution for business HR professionals from Oracle. The solution is offered both on premise (customer-site installation) and, more importantly, as a Cloud offering – SaaS. So, why should a technical person choose Software Support over other technical areas? "I think it is mainly because of the high level of technical skills required to provide the best technical solutions to our customers. Oracle Software Support covers complex solutions going from Database or Middleware to a vast area of business applications (basically covering any needs that a large enterprise may have). Working with such software requires very strong skills, both technical and functional, for the different areas, going from Finance, Supply Chain Management, Manufacturing and Sales to other very specific business processes. Our customers are large enterprises that already have a support layer inside their organization, and therefore the Oracle Technical Support Engineers work with highly specialized staff (DBAs, system/application admins, implementation consultants). This is a very important aspect for our engineers because they need to be highly skilled to match our customers' specialists' expectations." What's the career path in your team? "Technical Analysts joining our teams have a clear growth path. The main focus is to become a master of the product they will support. I think one needs 1 or 2 years to reach a good level of understanding of the product and to deliver optimal solutions, because of the complexity of our products. At a later stage, engineers can choose their professional development areas based on business needs and preferences, and then grow further towards a technical expert or a management role. We have analysts with more than 15 years of technical expertise who still learn and grow in the technical area. An important fact is that, due to the expansion of the Romanian software support center, there are various management opportunities. So, if you want to leverage your experience and you want to have people management responsibilities, Oracle Software Support is the place to be!" Our last question to Serban was about the benefits of being part of Oracle Software Support. Here is what he said: "We believe that Oracle delivers a state-of-the-art level of support to our customers. This is not possible without high investment in our staff. We commit from the start to support any technical analyst that joins us (junior or very senior) with any training needs they have for their job. We have various technical trainings as well as soft-skills trainings required for a customer-facing professional to be successful in their role. Last but not least, we're aiming to make Oracle Romania SW Support a global center of excellence, which means we're investing a lot in our employees." If you're looking for a job where you can combine your strong technical skills with customer interaction, Oracle Software Support is the place to be! Send us your CV at [email protected].

    Read the article

  • Do we ethically have the right to use the MAC Address for verification purposes?

    - by Matt Ridge
    I am writing a program, or rather starting at the very beginning of one, and I am thinking about purchase verification systems as a final step. I will be catering to Macs, PCs, and possibly Linux if all is said and done. I will also be programming this for smartphones, using C++ and Objective-C. (I am writing a blueprint before going in head first.) That being said, I am not asking for help on doing it yet; what I'm looking for is a realistic measurement of what could be expected as a viable and ethical option for a purchase verification system. Apple, through the Apple Store, and some other stores out there have their own "you bought it" check. I am looking to use a three-pronged verification system:

        1. Email/password.
        2. A 16- to 32-character serial number using alphanumeric characters and symbols, with upper- and lowercase variants.
        3. The MAC address.

    The first two are, in my mind, OK, but I have to ask from an ethical standpoint: is using the MAC address to lock the software to particular hardware unethical, or is it smart? I understand that if the Ethernet card changes (when it is not part of the logic board), or if the logic board changes, so does the MAC address, and the software will have to be re-verified. But I have to ask, with how everything is today... is it ethical to actually use the MAC address as a validation key or not? Should I be upfront about this kind of verification system, or should I keep it hidden as a secret? Yes, I know hackers and others will find ways of knowing what I am doing, but in reality that is why I am asking. I know no verification is foolproof, but making it harder to break is something I've always been interested in, and learning how to program is bringing up these questions, because I don't want to assume one thing and find out it's not really accepted in the programming world as a "you shouldn't do that" maneuver... Thanks in advance... I know this is my first programming question, but I am just learning how to program, and I am just making sure I'm not breaking some ethical programmer credo I shouldn't...

    Read the article

  • Ubuntu 12.04 menu bar, nautilus, terminal, and gtk themes not working after installation of Gimp 2.8

    - by Chris
    I installed Gimp 2.8 from this PPA: ppa:otto-kesselgulasch/gimp. After that, my system began having problems. This is my thought process in trying to fix what happened, in the order it happened:

        1. I noticed the menu bar at the top changed from an opaque black to perfectly clear, and the titles of applications and the hidden buttons reacted slowly. No big deal; I restarted to see if that fixed it.
        2. It didn't. In fact, when the logon screen came up, the password field was grey and boxy like a default Windows 98 theme (that's the best I can describe it), as were all the option buttons for GTK programs.
        3. I opened a terminal to try and reinstall GTK, but the terminal was just a black screen with no ability to input commands.
        4. I went to a TTY and reinstalled GTK 3 and GTK 2 (I have both on my system; I don't think they're in conflict, they hadn't been beforehand). I restarted. Nothing doing.
        5. I logged in; Nautilus isn't placing icons on my desktop. I click the launcher; it flashes, but no window opens. Trying to open it with Alt+F2 does nothing.
        6. I purged ubuntu-desktop, restarted, and reinstalled ubuntu-desktop. Nothing.

    I have no clue what to do at this point, so I'm asking for any help diagnosing the problem and fixing it.

    Read the article

  • What is the *right* way to use gnome-shell integrated chat?

    - by stevejb
    Please bear with me as I am still figuring out how to use gnome-shell. My question concerns how to use the integrated chat correctly. I have the following questions:

    1) When people chat with me, it pops up as a notification on the hidden bar at the bottom of the screen, and then that chat stays there so I can access it later. How do I initiate a chat in this manner, without opening an Empathy window? What I have been doing is:

        - hitting the Super key;
        - typing in the person's name, which brings up contacts;
        - initiating the chat using Empathy;
        - immediately closing the chat window.

    When the person responds, it comes through as a notification, and I then proceed to interact with the chat this way.

    2) What is the keyboard shortcut for bringing up the notifications bar? Ideally, I would like the following experience:

        - use some keyboard shortcut to bring up notifications;
        - begin typing the name of the notification that I wish to investigate, and have the matching work in a fuzzy manner, much like Ido mode's buffer-switching matching in Emacs;
        - when the right name is matched, hit Enter and bring up the chat with that person as the popup notification.

    Are these behaviours supported? If not, I would be happy to work on implementing them. I am an experienced programmer, but not familiar with gnome-shell. If someone could point me in the right direction in terms of whether this behaviour is supported, or where in the gnome-shell framework I would add it, I would really appreciate it. Thanks!

    Read the article

  • monitor height differences & the mouse going off screen

    - by fastmultiplication
    In Ubuntu 10.10 I have a dual-monitor setup. I have an nVidia graphics card and am using TwinView. One of the monitors is 1024 pixels high and the other is 900. In the monitor configuration screen, and in real life, I have them set up side by side, with the 1024-pixel one on the left. The result of this is that when I am at the bottom of the left monitor and move the mouse to the right, it goes into the hidden area below the right monitor's visible area. It seems like it would make a lot more sense for the pointer to be bumped up to the bottom of the right monitor, since one almost never wants to move the mouse into an area of the screen that doesn't show up, and systems I have used before have been set up that way. How can I set this up? I am not interested in lists of window managers for Ubuntu; I would like to know the identity of a particular WM, or a set of steps I can take, to solve the particular problem I have outlined above. Thanks! EDIT: I changed to using two separate X monitors, and set them up relatively positioned so that just the corners touch and the mouse can cross there, so the difference in heights doesn't matter.

    Read the article
