Search Results

  • How is Sencha Touch performing on Android in practice?

    - by Vidar Vestnes
    Hi, I'm just about to start a project using Sencha Touch, and have done some minor testing on my HTC Desire. All the tutorial videos on Vimeo seem to use an iPhone emulator running on a Mac. I'm not sure how fast this emulator is compared to a real iPhone or even a real Android device, but from what I have experienced, my HTC Desire does not perform as nicely as that emulator. All animations (sliding, fading, etc.) seem a bit laggy; you can easily notice that the FPS is much lower than in the Vimeo videos. The HTC Desire is a relatively new and modern Android 2.2 phone with decent hardware, so I'm wondering if Sencha Touch is "ready" for the Android platform. Does anybody have practical experience with Android and Sencha Touch?

    Read the article

  • Is it bad practice to output from within a function?

    - by Nick
    For example, should I be doing something like:

        <?php
        function output_message($message, $type = 'success') { ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php }
        output_message('There were some errors processing your request', 'error'); ?>

    or

        <?php
        function output_message($message, $type = 'success') {
            ob_start(); ?>
            <p class="<?php echo $type; ?>"><?php echo $message; ?></p>
        <?php return ob_get_clean(); }
        echo output_message('There were some errors processing your request', 'error'); ?>

    I understand they both achieve the same end result, but are there benefits to doing it one way over the other? Or does it not even matter?

    Read the article

  • Changes to 'resolv.conf' discarded when connecting to a new network

    - by sudheer
    I recently upgraded from 12.04 to 12.10 and am having issues connecting to the Internet. I get an IP address and can ping other IPs on the local network, but I cannot reach the Internet and cannot even ping www.google.com from a terminal. Making changes in /etc/resolv.conf, restarting the resolvconf service and rebooting works, but I need to do this every time I connect to a new network. How do I make these changes permanent? Can someone suggest a solution to this issue?
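
    For illustration, one commonly used way to make a nameserver survive resolvconf's regeneration of /etc/resolv.conf is to add it to the resolvconf "head" file. This is only a hedged sketch: the 8.8.8.8 address is an example placeholder, not necessarily the server the poster should use.

        # Have resolvconf prepend this nameserver to every resolv.conf it generates
        echo "nameserver 8.8.8.8" | sudo tee -a /etc/resolvconf/resolv.conf.d/head
        # Rebuild /etc/resolv.conf from the configured sources
        sudo resolvconf -u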

    Read the article

  • Unknown error: XNA cannot detect an importer for "Program.cs"

    - by Evan Kohilas
    I am not sure what I have done to cause this, but even after undoing all my edits the error still appears:

        Error 1: Cannot autodetect which importer to use for "Program.cs". There are no importers which handle this file type. Specify the importer that handles this file type in your project. (filepath)\Advanced Pong\AdvancedPongContent\Program.cs Advanced Pong

    After receiving this error, everything between #if and #endif in Program.cs is greyed out:

        using System;

        namespace Advanced_Pong
        {
        #if WINDOWS || XBOX
            static class Program
            {
                /// <summary>
                /// The main entry point for the application.
                /// </summary>
                static void Main(string[] args)
                {
                    using (Game1 game = new Game1())
                    {
                        game.Run();
                    }
                }
            }
        #endif
        }

    I have searched for this and could not find a solution anywhere. Any help is appreciated.

    Read the article

  • Chart Chooser Helps You Pick the Right Chart for the Job

    - by Jason Fitzpatrick
    If you’re not sure what kind of chart would best showcase the data you’re presenting, Chart Chooser makes short work of narrowing it down. Are you trying to showcase trends? Compare the composition of sets? Show distributions and trends together? By selecting what you’re trying to highlight, Chart Chooser automatically narrows the pool of chart types to show which would effectively achieve your end. Once you’ve narrowed it down to the chart type you want, you can even download an Excel template for that chart type and populate it with your own data. Hit up the link below to take it for a spin and grab some free templates. Chart Chooser [via Flowing Data]

    Read the article

  • Besides macros, are there any other metaprogramming techniques?

    - by mhr
    I'm making a programming language, and, having spent some time in Lisp/Scheme, I feel that my language should be malleable. Should I use macros, or is there something else I might or should use? Is malleable syntax even a good idea, or is it perhaps too powerful a concept? EDIT: While doing some research, I found fexprs. I don't really understand what these are; help with that in an answer would be appreciated too. EDIT2: Is it possible to have a language with macros (or something of a similar nature) without having s-expressions?

    Read the article

  • Transparent home directory encryption

    - by user86458
    #1: I'm very new to Ubuntu home directory encryption, or rather eCryptfs folder encryption. I read about it on Dustin's blog and tried implementing it. The problem is that my home directory is encrypted and contains a www folder. When I reboot the system, decryption does not happen at startup/boot, so Apache cannot find the files in the www folder because it is not mounted; in order to mount it I have to log in. Is there a way an encrypted home or Private folder can be mounted at boot without human intervention? #2: I have installed Ubuntu Server 11.10 and selected "encrypt home directory" during installation. With that Ubuntu setup, things work transparently even after a reboot and without logging in. Can anyone please explain or offer guidance on this?

    Read the article

  • In centralized version control, is it always good to update often?

    - by janos
    Assuming that: You are in a team developing some software. Your team is using centralized version control in the development process. You are working on a new feature which will surely take several days to complete, and you won't be able to commit before that because it would break the build. Your team members commit something every day that affects some of the files you're working with for your fancy new feature. Since this is centralized version control, you will have to update your local checkout at some point: at least once right before committing the new feature. If you update only once right before your commit, then there might be a lot of conflicts due to the many other changes by your teammates, which could be a world of pain to resolve all at once. Or, you could update often, and even if there are a few conflicts to resolve day by day, it should be easier to do, little by little. Can we say that it is always a good idea to update often?
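
    For illustration, a minimal sketch of the "update often" workflow, assuming Subversion as the centralized VCS (the commands and commit message are only an example, not the poster's actual setup):

        # Each morning, pull teammates' commits into the working copy and
        # resolve any small conflicts while they are still fresh
        svn update
        # ...keep working on the multi-day feature...
        # Days later, the final update right before committing has far less to reconcile
        svn update
        svn commit -m "Add the new feature"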

    Read the article

  • What's My Problem? What's Your Problem?

    - by Jacek Ziabicki
    Software installers are not made for building demo environments. I can say this much after 12 years (on and off) of supporting my fellow sales consultants with environments for software demonstrations. When we release software, we include installation programs and procedures that are designed for use by our clients – to build a production environment and a limited number of testing, training and development environments.

    Different Objectives

    Your priorities when building an environment for client use vs. building a demo environment are very different. In a production environment, security, stability, and performance concerns are paramount. These environments are built on a specific server and rarely, if ever, moved to a different server or different network address. There is typically just one application running on a particular server (physical or virtual). Once built, the environment will be used for months or years at a time. Because of security considerations, the installation program wants to make these environments very specific to the organization using the software and the use case, encoding a fully qualified name of the server, or even the IP address on the network, in the configuration. So you either go through the installation procedure for each environment, or learn how to clone and reconfigure the software as a separate instance to build all your non-production environments. This may not matter much if the installation is as simple as clicking on the Setup program. But for enterprise applications, you have a number of configuration settings that you need to get just right – so whether you are installing from scratch or reconfiguring an existing installation, this requires both time and expertise in the particular piece of software. If you need a setup of several applications that are integrated to talk to one another, it is a whole new level of complexity. Now you need the expertise in all of the applications involved (plus the supporting technology products), and in addition to making each application work, you also have to configure the integration endpoints. Each application needs the URLs and credentials to call the integration layer, and the integration must be able to call each application. Then you have to make sure that each app has the right data so a business process initiated in one application can continue in the next. And you will need to check that each application has the correct version and patch level for the integration to work.

    When building demo environments, your #1 concern is agility. If you can get away with a small number of long-running environments, you are lucky. More likely, you may get a request for a dedicated environment for a demonstration that is two weeks away: how quickly can you make this available so we still have the time to build the client-specific data? We are running a hands-on workshop next month, and we'll need 15 instances of the application X environment so each student can have a separate server for the exercises. We cannot connect to our data center from the client site – the client's security policy won't allow our VPN to go through – so we need a portable environment that we can bring with us. Our consultants need to be able to work at the hotel, at the airport, and on the airplane, so we really want an environment that can run on a laptop. The client will need two playpen environments running in the cloud, accessible from their network, for a series of workshops that start two weeks from now. We have seen all of these scenarios and more. Here you would be much better served by a generic installation that would be easy to clone.

    Welcome to the Wonder Machine

    The reason I started this blog is to share a particular design of a demo environment, a special way to install software, that can address the above requirements, even for integrated setups. This design was created by a team at Oracle Utilities Global Business Unit, and we are using this setup for most of our demo environments. In a bout of modesty, we called it the Wonder Machine. Over the next few posts – think of it as a novel in parts – I will tell you about the big idea, how it was implemented and what you can do with it. After we have laid down the groundwork, I would like to share some tips and tricks for users of our Wonder Machine implementation, as well as things I am learning about building portable, cloneable environments. The Wonder Machine is by no means a closed specification; it is under active development! I am hoping this blog will be of interest to two groups of readers – the users of the Wonder Machine we have built at Oracle Utilities, who want to get the most out of their demo environments and be able to reconfigure them to their needs – and to people who need to build environments for demonstration, testing, training, or development and would like to make them cloneable and portable to maximize the reuse of their effort. Surely we are not the only ones facing this problem? If you can think of a better way to solve it, or if you can help us improve on our concept, I will appreciate your comments!

    Read the article

  • Suggestions needed on an architecture for a multi-client, customisable web application

    - by ValidfroM
    Our product is a web-based course management system (ASP.NET, SQL Server). We have 10+ clients and may get more in the future. Currently, if one of our customers needs extra functionality or customised business logic, we change the db schema and code to meet their needs (we only have one branch of the code base and one database schema). To keep one client's changes from affecting the others' route, we use a client flag, defined in a web config file, so the extra fields and business logic only apply to that particular customer's system:

        if (ClientId == "ABC")
        {
            // Do ABC stuff
        }
        else
        {
            // Normal route
        }

    One of our senior colleagues said that this way, a small company like us can save the resources it would take to support multiple code bases. But what I feel is that this strategy makes our code and database even harder to maintain. Has anyone been in a similar situation? How do you handle it?
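
    For illustration, a minimal sketch of how such a per-deployment flag is typically read in ASP.NET (hedged: the "ClientId" appSettings key and class names are made up for the example, not necessarily what the poster uses):

        // Sketch only: centralise the per-deployment client flag read from web.config,
        // e.g. <appSettings><add key="ClientId" value="ABC" /></appSettings>  (hypothetical key)
        using System.Configuration;

        public static class ClientConfig
        {
            public static string ClientId
            {
                get { return ConfigurationManager.AppSettings["ClientId"] ?? "DEFAULT"; }
            }
        }

        // Elsewhere in the code base:
        // if (ClientConfig.ClientId == "ABC") { /* ABC-specific logic */ } else { /* normal route */ }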

    Read the article

  • Number of Classes in a Namespace - Code Smell?

    - by Tim Claason
    I have a C# library that's used by several executables. There are only a couple of namespaces in the library, and I just noticed that one of them has quite a few classes in it. I've always avoided having too many classes in a single namespace because of categorization, and because subconsciously I think it looks "prettier" to have a deeper hierarchy of namespaces. My question is: does anyone else consider it a "code smell" when a namespace has many classes - even if the classes relate to each other? Would you put in a lot of effort to find nuances in the classes that allow for subcategorization?

    Read the article

  • Force Your Mac to Sort Folders on Top of Files (Windows Style)

    - by Eric Z Goodnight
    Even die-hard Mac converts have their issues with Mac OS, and one of those problems is that OS X lists folders mixed in with all other files. Here’s how to fix that in under five minutes with a clever hack. You know you’ve had that issue. You’ve dug through your files looking for that one elusive folder, and because it’s jumbled in with all the other stuff, it’s more or less impossible to find. Have no fear, with no downloads or silly plug-in software, you can finally make Mac OS behave like Windows and Linux and list those folders in the proper order.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.

    Planning for HA in IaaS

    IaaS involves Virtual Machines - so in effect, an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. With almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then the access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter or to a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need to have multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.

    Planning for HA in PaaS

    Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used not only for HA but for DR.

    Planning for HA in SaaS

    In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).

    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process, the business will push back and potentially not implement HA.

    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)

    Read the article

  • How to disallow indexing but allow crawling?

    - by John Doe
    On the front page of my website, I have previews of articles (with a small introduction to each) that link to the full articles. I want to disallow the front page to prevent duplicate content. But if I do this (in robots.txt), would the rest still be crawled? I mean, would the full articles still be reached by the crawler even though I disallowed the only page that links to them? It's not that I want to keep the crawler from accessing the page and following the links in it; I just don't want it to save the information (which will be repeated in the full articles).
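
    For context, a hedged illustration of the distinction involved here: a robots.txt Disallow blocks crawling of the page itself, whereas the usual way to say "don't index this page, but do follow its links" is a robots meta tag on the front page, along these lines:

        <!-- In the <head> of the front page: keep it out of the index, but follow its links -->
        <meta name="robots" content="noindex, follow">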

    Read the article

  • Making a Class Diagram for an MVC Pattern Project

    - by iMohammad
    I have a question about making a class diagram for an MVC-based college senior project. We have two user actors in the system; let's say Undergrad and Graduate students are children of an abstract class called User (generalisation), and each actor has its own features. My question: in such a case, do we need to model these two actors as separate classes that inherit from the abstract User class, even though I'm going to implement them as roles using one model called User Model? I think you can see my confusion here. I code using the MVC pattern, but I've never made a class diagram for it. Thank you in advance!
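
    For illustration, the two shapes being weighed might be sketched roughly as below, using C# for concreteness (a hedged sketch; the class and property names are made up for the example and are not taken from the project):

        // Option 1: generalisation as drawn in the class diagram
        public abstract class User { /* shared features */ }
        public class UndergradStudent : User { /* undergrad-specific features */ }
        public class GraduateStudent : User { /* graduate-specific features */ }

        // Option 2: a single model with a role, as the poster plans to implement it
        public class UserModel
        {
            public string Role { get; set; }   // e.g. "Undergrad" or "Graduate"
        }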

    Read the article

  • Triggering an action when a specific volume is connected

    - by ereOn
    I have a USB key that contains my keepass2 password database, and I'd like to perform some actions when it is plugged into my computer, namely: auto-mount it to a specific location, and, once the mounting is done properly, launch keepass2 on the password database file. Simple tasks, I guess, but I can't find out how to do that. I'm using Ubuntu 12.10, and it auto-mounts the device as a "media usb-key" and tries to open the images on it (even though there are none). What is the best way to do this, and to disable the Ubuntu auto-mounting (so it doesn't conflict)?
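
    For illustration, one commonly used mechanism for "run something when this particular volume appears" is a udev rule keyed on the filesystem UUID. This is a hedged sketch only: the UUID and script path are placeholders, and launching a GUI program such as keepass2 from udev usually needs an extra step to reach the user's desktop session.

        # /etc/udev/rules.d/99-keepass-usb.rules  (sketch; UUID and script path are placeholders)
        ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_UUID}=="1234-ABCD", RUN+="/usr/local/bin/on-keepass-key.sh"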

    Read the article

  • Ubuntu 13.04 on UEFI system with Windows Boot Manager as the main loader

    - by Mehrdad
    On my old laptop (legacy BIOS, MBR disk), this was perfectly possible to get working: I turn on the computer and see the Windows Boot Manager, and I use EasyBCD (or BootPart, or something else) to add an option to the BCD menu that lets me boot into GRUB, and from there into Ubuntu. I can't figure out how to do this on my new laptop (UEFI, GPT disk), whether in UEFI or legacy mode. Currently I've installed (and even booted!) Ubuntu on my laptop, but only with the help of an external GRUB (on a USB flash drive). How can I add GRUB as an option in the Windows Boot Manager on a UEFI laptop? (No, I don't want to change my primary boot loader, so no, I don't want to overwrite the Windows boot loader with GRUB.)

    Read the article

  • How to use short breaks at work effectively for self-development?

    - by Alaudo
    At the moment my daily work as a developer requires me to take short 10-20 minute breaks every 2-3 hours. It would be nice if I could use those effectively to improve my expertise in programming or CS in general. I have tried several things: reading jokes online gets boring very soon; trying to solve some (even the simplest) tasks from various coding contests requires more time, since as soon as I have some idea of an algorithm the break is over; and reading a randomly picked Wikipedia article about computer science sometimes requires more time, depending on the article, and is not easy reading for a break. So I ended up reading the most-voted StackOverflow questions and answers: that is entertaining and educational. Do you have any other suggestions?

    Read the article

  • GLSL demo suggestions?

    - by brainydexter
    At a lot of places where I interviewed recently, I have been asked many times whether I have worked with shaders. Even though I have read about and understand the pipeline, the answer to that question has been no. Recently, one of the places asked if I could send them a sample of 'something' that is "visually polished". So I decided to take the plunge and wrote some simple shaders in GLSL (with OpenGL). I now have a basic setup where I can use VBOs with GLSL shaders. I have a very short window left to send them something, and I was wondering if someone with experience could suggest an idea that is interesting enough to grab someone's attention. Thanks

    Read the article

  • Any good web frameworks for asynchronous multiplayer games?

    - by Steven Stadnicki
    I'm trying to craft a site for web-based (original) board games, and my client (currently written in Actionscript, but that's highly fungible) works fine - I can play solitaire games in the client - but it has nothing to connect to. What I'm looking for is a server framework for handling accounts/authentication and game tracking: something that would let players log in, show them a list of their current games, let them invite friends to new games, let them make moves in the games they have open, etc. I'm flexible on language; obviously I'm going to have to write a lot of server code to handle the actual game logic, but that should be straightforward enough. I'm more concerned with how to handle the user (and game) DBs, though suggestions for a good server framework for communicating with the DBs (and serving up, most likely, JSON for client communications) are also welcome. Right now my leaning is towards Ruby (probably with Rails) but as far as I can determine it would be a pretty good chunk of effort to set up the necessary databases, so having something even higher-level would be really useful to me.

    Read the article

  • Can I use Ubuntu as a wireless media server which performs all decoding/processing server-side?

    - by AthloX
    I want to set up an Ubuntu 12.04 desktop as a home media server. I have a Windows 7 netbook, an Ubuntu 12.04 LTS laptop and even a Samsung Galaxy Note tablet (Android), plus two desktops in another room that dual-boot Windows 7 and Ubuntu, and a Sharp AQUOS plasma TV with Wi-Fi connected. I want to install Ubuntu as a media server to stream audio/video files over Wi-Fi. Beyond that, I want the media server to use its own processing power to decode and stream, so that the remote end just plays the file without using its own resources. Is it possible to use Ubuntu as a media server that streams files without making the remote end use its own resources? I want only the Wi-Fi bandwidth and the media server's hardware to be used; the remote device should only contribute its speakers and screen, not its processing power. Any suggestions on whether this is possible?

    Read the article

  • Cannot read from 2nd SATA data drive connected via SATA docking station

    - by Robbo
    Installed 10.10 this week on a dual-boot system. Everything else works fine, but I cannot read from the 2nd SATA drive with all my data. The same drive works normally when booted into Windows XP. The interesting part is that I can see the drive in Ubuntu Disk Manager, can read all its attributes and can test it; it shows up in Disk Manager, Storage Device Manager and Mount Manager, and I can mount it and even change attributes. It appears healthy, but it does not show up in "Computer" or anywhere else from which it can be accessed. The drive is connected via an external eSATA docking station which is connected to a SATA port on the motherboard.

    Read the article

  • DTLoggedExec 1.0.0.2 Released

    - by Davide Mauri
    These last days have been full of work, and the coming days, up until the end of July, will follow the same ultra-busy scheme. This makes the improvement of DTLoggedExec a little slower than I would like, but nonetheless on Friday I was able to release an updated version of the tool that fixes a bug and adds a very convenient option to make the creation of execution logs even more straightforward: [bugfix] Fixed a bug that prevented loading packages from the SSIS Package Store; [new] Added support for the {filename} placeholder in both Data Flow Profiling and the CSV Log Provider. The added feature allows you to generate Data Flow profile logs and CSV logs that have the same name as the package that generated them, e.g.: DTLoggedExec.exec /FILE:”MyPackage.dtsx” /LPA:"FILE=C:\Log\{filename}_{date}_{time}.dtsCSVLog"

    Read the article

  • Nautilus only starts as root user

    - by user7978
    Hello. I am running Ubuntu 10.04 64-bit. When I attempt to start Nautilus from the command line, it does not appear -- although a PID is generated. As root/sudo, I can start Nautilus fine. One note: I run e16 as the window manager, so I do not use Nautilus to draw my desktop. However, even under this configuration, Nautilus used to run fine as a "regular" user. The permissions for Nautilus are the same as for the other programs in /usr/bin. I believe this is a GNOME issue, but I'm fumbling at this point.

    Read the article

  • Podcast with AJI about iOS development coming from a .NET background

    - by Tim Hibbard
    I talked with Jeff and John from AJI Software the other day about developing for the iOS platform. We chatted about learning Xcode and Objective-C, provisioning devices and the app publishing process. We all have a .NET background and made lots of comparisons between the two platforms/ecosystems/fanbois. They even let me throw in a plug for Christian Radio Locator. Jeff was my first contact with the Kansas City .NET community, probably about 10 years ago. He pushed me to talk more (and rescued me from my first talk that bombed) and to blog more. One time a group of us took a 16-hour car trip to South Carolina for a code camp and live-podcasted the whole thing. Good times. Listen to the show. Click here to subscribe to more AJI Reports in the future.

    Read the article
