Search Results

Search found 75052 results on 3003 pages for 'adf bam data control prod'.


  • Disable updates for certain software

    - by tadatma
    When trying to update an Ubuntu install (or any Linux distribution, for that matter), I often find a list of updates amounting to 150 MB or more. To my displeasure, I find that the culprit more often than not is LibreOffice. I know I can untick the items connected with LibreOffice, but I wonder if there is a more elegant way; maybe a small program in between that helps me untick the packages I wish to keep un-updated.
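
    One possible approach (a sketch of apt's package-hold mechanism, assuming the LibreOffice packages follow the usual libreoffice-* naming; not part of the original question):

        # Hold every installed libreoffice-* package so updates skip them
        sudo apt-mark hold $(dpkg -l 'libreoffice*' | awk '/^ii/ {print $2}')
        # List current holds, and release them when desired
        apt-mark showhold
        sudo apt-mark unhold $(apt-mark showhold)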

    Read the article

  • Send raw data to USB parallel port after upgrading to 11.10 oneiric

    - by zaphod
    I have a laser cutter connected via a generic USB-to-parallel adapter. The laser cutter speaks HPGL, as it happens, but since this is a laser cutter and not a plotter, I usually want to generate the HPGL myself, since I care about the ordering, speed, and direction of cuts and so on. In previous versions of Ubuntu, I was able to print to the cutter by copying an HPGL file directly to the corresponding USB "lp" device. For example:

        cp foo.plt /dev/usblp1

    Well, I just upgraded to Ubuntu 11.10 oneiric, and I can't find any "lp" devices in /dev anymore. D'oh! What's the preferred way to send raw data to a parallel port in Ubuntu?

    I've tried System Settings > Printing > Add, hoping that I might be able to associate my device with some kind of "raw printer" driver and print to it with a command like:

        lp -d LaserCutter foo.plt

    But my USB-to-parallel adapter doesn't seem to show up in the list. What I do see are my HP Color LaserJet, two USB-to-serial adapters, "Enter URI", and "Network Printer". Meanwhile, over in /dev, I do see /dev/ttyUSB0 and /dev/ttyUSB1 devices for the 2 USB-to-serial adapters. I don't see anything obvious corresponding to the HP printer (which was /dev/usblp0 prior to the upgrade), except for generic USB stuff. For example, sudo find /dev | grep lp produces no output. I do seem to be able to print to the HP printer just fine, though. The printer setup GUI gives it a device URI starting with "hp:", which isn't much help for the parallel adapter. The CUPS administrator's guide makes it sound like I might need to feed it a device URI of the form parallel:/dev/SOMETHING, but of course if I had a /dev/SOMETHING I'd probably just go on writing to it directly.

    Here's what dmesg says after I disconnect and reconnect the device from the USB port:

        [  924.722906] usb 1-1.1.4: USB disconnect, device number 7
        [  959.993002] usb 1-1.1.4: new full speed USB device number 8 using ehci_hcd

    And here's how it shows up in lsusb -v:

        Bus 001 Device 008: ID 1a86:7584 QinHeng Electronics CH340S
        Device Descriptor:
          bLength                18
          bDescriptorType         1
          bcdUSB               1.10
          bDeviceClass            0 (Defined at Interface level)
          bDeviceSubClass         0
          bDeviceProtocol         0
          bMaxPacketSize0         8
          idVendor           0x1a86 QinHeng Electronics
          idProduct          0x7584 CH340S
          bcdDevice            2.52
          iManufacturer           0
          iProduct                2 USB2.0-Print
          iSerial                 0
          bNumConfigurations      1
          Configuration Descriptor:
            bLength                 9
            bDescriptorType         2
            wTotalLength           32
            bNumInterfaces          1
            bConfigurationValue     1
            iConfiguration          0
            bmAttributes         0x80 (Bus Powered)
            MaxPower               96mA
            Interface Descriptor:
              bLength                 9
              bDescriptorType         4
              bInterfaceNumber        0
              bAlternateSetting       0
              bNumEndpoints           2
              bInterfaceClass         7 Printer
              bInterfaceSubClass      1 Printer
              bInterfaceProtocol      2 Bidirectional
              iInterface              0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x82 EP 2 IN
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
              Endpoint Descriptor:
                bLength                 7
                bDescriptorType         5
                bEndpointAddress     0x02 EP 2 OUT
                bmAttributes            2
                  Transfer Type            Bulk
                  Synch Type               None
                  Usage Type               Data
                wMaxPacketSize     0x0020 1x 32 bytes
                bInterval               0
        Device Status: 0x0000 (Bus Powered)
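
    (One thing worth checking, offered as a hedged sketch rather than a confirmed fix: in newer kernels the usblp character devices moved from /dev/usblpN to /dev/usb/lpN, so a node may exist under the new path once the usblp module binds the printer-class interface.)

        # Is the usblp driver loaded and bound to the printer interface?
        lsmod | grep usblp || sudo modprobe usblp
        ls -l /dev/usb/
        # If a node such as /dev/usb/lp0 appears, raw HPGL can be
        # copied to it exactly as before:
        cp foo.plt /dev/usb/lp0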

    Read the article

  • Why is git-svn useful?

    - by Wes
    I have read these related questions:

      - "I'm a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS?"
      - "git for personal (one-man) projects. Overkill?"

    ...and I understand why git is useful. What I don't understand is why tools like git-svn that allow git to integrate with svn are useful. When, for example, a team is working with svn, or any other centralised SCM, why would a member of the team opt to use git-svn? Are there any practical advantages for a developer who has to synchronize with a centralized repository?
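
    For reference, the usual git-svn round trip looks like this (a sketch; the repository URL is invented):

        # clone an svn trunk into a local git repository
        git svn clone https://svn.example.com/project/trunk project
        cd project
        # ...commit locally with full git tooling (cheap branches, rebase, bisect)...
        git svn rebase    # fetch and replay new svn revisions
        git svn dcommit   # push local commits back as svn revisions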

    Read the article

  • Data Conversion in SQL Server

    Most of the time, you do not have to worry about implicit conversion in SQL expressions, or when assigning a value to a column. Just occasionally, though, you'll find that data gets truncated, queries run slowly, or comparisons just seem plain wrong. Robert Sheldon explains why you sometimes need to be very careful if you mix data types when manipulating values.
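
    A hedged illustration (not from the article) of the kind of surprise implicit conversion can cause:

        -- Assigning to a shorter type silently truncates
        DECLARE @s VARCHAR(5) = 'abcdef';
        SELECT @s;        -- 'abcde'

        -- Integer division: both operands are INT, so the result is INT
        SELECT 3 / 2;     -- 1
        SELECT 3 / 2.0;   -- 1.500000 (one operand promotes the other)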

    Read the article

  • Variable number of GUI Buttons

    - by Wakaka
    I have a generic HTML5 Canvas GUI Button class and a Scene class. The Scene class has a method called createButton(), which will create a new Button with an onclick parameter and store it in a list of buttons. I call createButton() for all UI buttons when initializing the Scene. Because buttons can appear and disappear very often during rendering, the Scene first deactivates all buttons (temporarily removing their onclick, onmouseover, etc. properties) before each render frame. During rendering, the renderer then activates the required buttons for that frame.

    The problem is that part of the UI requires a variable number of buttons, and their onclick, onmouseover, etc. properties change frequently. An example is a buffs system. The UI will list all buffs as square sprites for the currently selected unit, and mousing over each square will bring up a tooltip with some information on the buff. But the number of buffs is variable, so I won't know how many buttons to create at the start. What's the best way to solve this problem? (P.S. My game is in Javascript, and I know I can use HTML buttons, but I would like to make my game purely Canvas-based.)

      1. Create buttons on-the-fly during rendering. Thus I will only have buttons when I require them. After the render frame these buttons would be useless and removed.
      2. Create a fixed set of buttons, assuming the number of buffs per unit won't exceed some maximum. During each render frame, activate the buttons accordingly and set their onmouseover property.
      3. Assign a button to each Buff instance. This sounds wrong, as the buff button is part of the GUI, which can only have one unit selected. Assigning a button to every single Buff in the game seems to be overkill. Also, I would need to change each button's position every render frame, since its order in the unit's list of buffs matters.

    Any other solutions? I'm actually quite for idea (1), but am worried about the memory/time cost of creating a new Button() object every render frame. But this is in Javascript, where object creation is oh-so-common ({} mainly) thanks to automatic garbage collection. What is your take on this? Thanks!
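
    A minimal sketch of idea (1) with a small object pool to blunt the per-frame allocation worry (all names hypothetical):

        // Reuse button objects across frames instead of allocating new ones
        function ButtonPool() {
          this.free = [];
          this.active = [];
        }
        ButtonPool.prototype.acquire = function (x, y, w, h, onclick, onmouseover) {
          var b = this.free.pop() || {};   // recycle a retired button if possible
          b.x = x; b.y = y; b.w = w; b.h = h;
          b.onclick = onclick;
          b.onmouseover = onmouseover;
          this.active.push(b);
          return b;
        };
        ButtonPool.prototype.releaseAll = function () {
          // call after each render frame: buttons retire into the free list
          while (this.active.length) this.free.push(this.active.pop());
        };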

    Read the article

  • Are there any reasons to use Bazaar over Hg or Git?

    - by NeuronQ
    The world of DVCSs seems split between Git and Mercurial nowadays, but lots of projects and places (like my new employer) use Bazaar. And it's not a matter of inertia, where people just use something because "that's how it's always been done"; these guys are agile and sometimes seem to embrace change just for the fun of having more things to fix. Yet no one has given me any convincing arguments for using Bzr over Hg or Git. I can understand seeing Git as "too complicated", but you can't apply that kind of judgement between Hg and Bzr. So then, what are the features of Bazaar that would justify its use over Mercurial (or Git) in any given situation?

    Read the article

  • In Subversion, how should I set up a new major version of my application?

    - by Steve McLeod
    I'm about to start work on a new version (version 4) of my commercial application. I use Subversion. Based on your experiences, mistakes, and successes, how would you recommend I set up the new version in Subversion? Here's some info: I intend to keep releasing critical updates in version 3 for some time after version 4 is released. However all development of new features will be solely in version 4. In case it is relevant: I'm a solo developer on this product, and that is likely to remain the case.
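
    One conventional Subversion layout for this situation (a sketch, not a recommendation specific to the poster's repository; the revision number is invented):

        # keep version 3 on a maintenance branch; develop version 4 on trunk
        svn copy ^/trunk ^/branches/3.x -m "Open maintenance branch for v3 releases"
        # critical v3 fixes land on branches/3.x; new features land on trunk
        # cherry-pick a fix across when both lines need it (from a trunk working copy):
        svn merge -c 1234 ^/branches/3.x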

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS, for that matter) is supposed to be to have each commit make the smallest change possible. But that doesn't match how I work at all.

    For example, I recently needed to add some code that checked whether the version of a plugin to my system matched the versions the system supports, and if not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system. The plugin-checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin-checking code works, I needed to edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done, I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc., etc.

    Being lazy, I'd probably just git add . && git commit -a the whole thing. Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you, or that make this process easier? I don't think I can somehow magically always modify stuff in the perfect order, since I don't know that order until after I start modifying and seeing what comes up.

    I know I can use git add --interactive etc., but it seems, at least for me, kind of hard to know that I'm grabbing exactly the correct changes, so that each commit will actually work. Also, since the changes are sitting in the current directory, it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and then run the tests, and I had missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in, either grab the missing lines from the stash and then put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit).

    Thoughts? Suggestions?

    P.S. I hope this is an appropriate question. The help says development methodologies and processes.
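
    One widely used recipe for exactly this (a sketch using standard git commands; the test command and commit message are illustrative):

        git add -p                  # interactively stage only the colorization hunks
        git stash --keep-index      # set aside unstaged changes; staged ones stay in the worktree
        make test                   # verify the staged subset works on its own
        git commit -m "Extract colorization into a shared module"
        git stash pop               # bring the remaining changes back
        # repeat add -p / stash / test / commit for the next logical change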

    Read the article

  • Which version management design methodology to be used in a Dependent System nodes?

    - by actiononmail
    This is my first question, so please indicate if it is too vague or hard to understand. My question relates to high-level design.

    We have a system (specifically an ATCA chassis) configured in a star topology, with a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux along with other proprietary applications. I have to build a recovery framework design so that any software entity, whether Linux itself, a ramdisk, or an application, can be rolled back to a previous good version if something bad happens. I am therefore thinking of maintaining a state/version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk, and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see following diagram:-

    So I am in a dilemma whether to use the package-management methodology used by Debian-based distributions (like Ubuntu) or a Git-repository methodology, in order to roll back to previous good versions on either one SN or all the dependent SNs. The method should also make it easy to upgrade SNs along with the MN. Some of the features I am trying to achieve:

      1. Upgrading even a single software entity is achievable without hindering others.
      2. Dependency checks must be done before applying a rollback or upgrade on each SN.
      3. A user prompt should be given if a dependency check fails. If the user still goes ahead with the rollback, all the dependent SNs should get a notification to roll back their own releases (if required).
      4. The binaries should be distributed to the SNs in advance so that the recovery process is faster, rather than fetching everything from the MN each time.
      5. Release patches from developers for bug fixes and feature enhancements can be applied on a running system.
      6. Each version can be easily tracked and distinguished.

    Thanks
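
    A purely hypothetical sketch (all names and versions invented) of what one entry in such a state/version matrix might record:

        {
          "state": 7,
          "nodes": {
            "SN1": { "kernel": "3.4.2", "ramdisk": "1.9", "app": "2.3.1", "depends_on": ["SN2"] },
            "SN2": { "kernel": "3.4.2", "ramdisk": "1.9", "app": "1.8.0", "depends_on": [] }
          }
        }

    A rollback to state 7 would then walk depends_on before touching any node.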

    Read the article

  • Best practices for launching a new software version

    - by steve
    I rebuilt a web app to replace a version that we have been using for the last 3-4 years. We have a few thousand clients and a few hundred active users per day. The functionality is basically the same. The new version is a little bit faster with a few enhancement features and there are a lot of behind the scenes changes that the clients will never see. The UI is quite different but ultimately much easier to use and navigate. How should I go about having our clients stop using the old system and start using the new one? I am currently putting together a video that will play on the web site as well as within the app. The video will go through the pages and focus on some key changes. I was also thinking about an intro page that will display once the user logs in and explains some of the features.

    Read the article

  • Tracking work history in a git repo

    - by Code-Guru
    Previous related questions:

      - "Code bases for desktop and mobile versions of the same app"
      - "Git branching and tagging best practices"

    Question: I have split my repo into three directories (swing, android, and common), as suggested by @KarlBielefeldt in response to my previous question. Now I am jumping back and forth between developing my Android port and tweaking/adding features to my original Swing app. All of my commits are linear (fast-forward), and only my commit messages give hints indicating whether I'm working on my Swing app or my Android app. Is there a better way to keep track of the work flow in my git repo?
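
    One way to make the two streams visible in history (a sketch with standard git commands; branch names invented):

        git checkout -b swing master      # Swing work lives here
        git checkout -b android master    # Android port lives here
        # merge with --no-ff so the graph records which line each commit came from
        git checkout master
        git merge --no-ff android
        git log --graph --oneline --all   # history is no longer a flat line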

    Read the article

  • Data-driven animations

    - by saadtaame
    Say you are using C/SDL for a 2D game project. It's often the case that people use a structure to represent a frame in an animation. The struct consists of an image and how long the frame is supposed to be visible. Is this data sufficient to represent somewhat complex animation? Is it a good idea to separate animation-management code from animation data? Can somebody provide a link to animation tutorials that store animations in a file and retrieve them when needed? I read about this in a book (AI Game Programming Wisdom) but would like to see a real implementation.
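
    A minimal sketch (not from the book) of that separation in C: frames as plain data that could be loaded from a file, with playback logic kept apart:

        typedef struct {
            const char *image_path;  /* sprite shown for this frame */
            float       duration;    /* seconds the frame stays visible */
        } Frame;

        typedef struct {
            Frame *frames;           /* loaded from a data file at startup */
            int    frame_count;
            int    current;
            float  elapsed;
        } Animation;

        /* advance the animation by dt seconds, looping at the end */
        void animation_update(Animation *a, float dt) {
            a->elapsed += dt;
            while (a->elapsed >= a->frames[a->current].duration) {
                a->elapsed -= a->frames[a->current].duration;
                a->current = (a->current + 1) % a->frame_count;
            }
        }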

    Read the article

  • Ad-hoc reporting similar to Microstrategy/Pentaho - is OLAP really the only choice (is OLAP even sufficient)?

    - by TheBeefMightBeTough
    So I'm getting ready to develop an API in Java that will provide all dimensions, metrics, hierarchies, etc. to a user such that they can pick and choose what they want (say, e.g., dimensions of Location (a store) and Weekly, and the metric Product Sales $), provide their choices to the API, and have it spit out an object that contains the answer to their question (the object would probably be a set of cells). I don't even believe there will be much drill up/down. The data warehouse the API will interface with is in a standard form (FACT tables, dimensions, star schema format).

    My question is: is an OLAP framework such as Mondrian the only way to achieve something akin to ad-hoc reporting? I can envisage a really large Cube (or VirtualCube) that contains most of the dimensions and metrics the user could ever want, which would give the illusion of ad-hoc reporting. The problem is that there is a ton of setup to do (so much XML) to get the framework to work with the data. Further, it requires specific knowledge, such as MDX, and even more so learning the framework's peculiarities (the Mondrian API). Finally, I am not positive it will scale much better than simply making queries against a SQL database. OLAP feels to me like very old technology. Is performance really an issue anymore?

    The alternative I can think of would be dynamic SQL. If the existing tables in the data warehouse conform to a naming scheme (FACT_, DIM_, etc.), or if a very simple config file/database table existed that stored which tables are fact tables, which are dimensions, and what metrics are available, then couldn't the API read from that and assemble the appropriate SQL query? Would this necessarily be harder than learning MDX, Mondrian (or another OLAP framework), and creating all the cubes?

    In general, I feel that OLAP is at the same time too powerful (supports drill up/down, complex functions) and outdated, and I am reluctant to base my architecture on it. However, I am unsure whether the alternative(s), such as rolling my own ad-hoc reporting framework using dynamic SQL, would remove any complexity while still fulfilling requirements, both functional and non-functional (e.g., scalability; some FACT tables have many millions of rows). I also wonder about other techniques (e.g., Hive). Has anyone here tried to do ad-hoc reporting? Any advice? I expect this project to take a pretty long time (3 months minimum, but probably longer), so I just do not want to commit to an architecture without being absolutely sure of its pros and cons. Thanks so much.
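
    For concreteness, the kind of query such a generator might assemble for the example above (a hedged sketch; table and column names invented):

        -- Product sales by store and week, built from FACT_/DIM_ metadata
        SELECT d_loc.store_name,
               d_time.week_of_year,
               SUM(f.sales_dollars) AS product_sales
        FROM   FACT_SALES f
               JOIN DIM_LOCATION d_loc  ON f.location_key = d_loc.location_key
               JOIN DIM_TIME     d_time ON f.time_key     = d_time.time_key
        GROUP BY d_loc.store_name, d_time.week_of_year;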

    Read the article

  • Git Branch Model for iOS projects with one developer

    - by glenwayguy
    I'm using git for an iOS project, and so far have the following branch model:

        feature_branch (usually multiple) -> development -> testing -> master

    Feature branches are short-lived: just used to add a feature or fix a bug, then merged back into development and deleted. Development is fairly stable, but not ready for production. Testing is for when we have a stable version with enough features for a new update, and we ship to beta testers. Once testing is finished, it can be moved back into development or advanced into master.

    The problem, however, lies in the fact that we can't instantly deploy. On iOS, it can be several weeks between the time a build is released and when it actually hits users. I always want to have a version of the code that is currently on the market in my repo, but I also have to have a place to keep the current stable code to be sent for release. So:

      - where should I keep stable code?
      - where should I keep the code currently on the market?
      - and where should I keep the code that is in review with Apple, and will (hopefully) be put on the market soon?

    Also, this is a one-developer team, so collaboration is not totally necessary, but preferred because there may be more members in the future.
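
    One hedged possibility (tag and branch names invented) is to pin those states with annotated tags rather than extra long-lived branches:

        git tag -a v2.3-submitted -m "Sent to App Store review" master
        git tag -a v2.3-live -m "Approved and live" v2.3-submitted
        # hotfix what's on the market without touching ongoing development:
        git checkout -b hotfix-2.3 v2.3-live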

    Read the article

  • Using Views to Expose Encrypted Data in SQL Server

    I'm using SQL Server's built-in encryption to hide data in one of my SQL Server databases, but this is a reporting system and my end users need to be able to query the data without having to remember the specialized decryption functions. Is there a way to do this? Yes, there is, via the use of views.
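
    A hedged sketch of the idea (all object names invented; assumes certificate-protected symmetric-key encryption):

        -- The view hides the decryption call from report writers
        CREATE VIEW dbo.CustomerReport
        AS
        SELECT CustomerID,
               CONVERT(VARCHAR(11),
                       DECRYPTBYKEYAUTOCERT(CERT_ID('ReportCert'), NULL, SSN_Encrypted)
               ) AS SSN
        FROM dbo.Customer;
        GO
        -- End users query the view like any ordinary table
        GRANT SELECT ON dbo.CustomerReport TO ReportingRole;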

    Read the article

  • Branching strategy for parallel development that won't be in the same release?

    - by Telastyn
    My team is working on a product, which for business reasons needs to be released on a regular schedule. An issue has arisen where we want to do development in parallel for the upcoming release, as well as the 'next' release. This is to become standard practice, so it's not as straightforward as cutting a feature branch for the new work. We'll continually have 2+ teams working on different releases of the same product. Is there an SCM best practice for this sort of arrangement?
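
    One common arrangement (a sketch only, illustrated with git although the idea is SCM-agnostic; branch names invented) is a release branch per scheduled release, cut from a shared mainline:

        git branch release/2014.06 master   # upcoming release stabilizes here
        git branch release/2014.09 master   # the next release proceeds in parallel
        # a fix lands on the oldest release that needs it, then merges forward:
        git checkout release/2014.09
        git merge release/2014.06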

    Read the article

  • What is the canonical approach to using a VCS right from a project's infancy?

    - by Anonymous -
    Background: I've used VCS (mainly git) in the past to manage many existing projects, and it works great. Typically, with an existing project, I would check in each change I make to the code that either optimizes or changes the overall functionality (you know what I mean: in suitable steps, not every single line I change).

    Problem: One thing I've not had so much practise at is creating new projects. I'm in the process of starting a new project of my own that will probably grow quite large, but I'm finding that there is a lot to do and a lot changing in the first few hours/days/weeks, up until the product is actually functioning in its most basic form. Is there any point in me checking in each step of the process as I would with an existing project? I'm not breaking the project with changes I make, since it isn't working yet. At the moment I've simply been using VCS as a backup at the end of each day, when I leave the computer. My first few commits were things like "Basic directory structure in place" and "DB tables created". How should I use a VCS when starting a new project?

    Read the article

  • How to restore my D Drive

    - by buggi
    I need help recovering one of the drives on my Windows 7 machine. I used to have two drives, C and D, and I used D to save all my data. I have been using my laptop for 3 years, and when I looked today, I couldn't find the D drive on my computer :(. When I opened the partitioning wizard, it showed me that the space I allocated for D is there, but as unallocated space. I badly need the data from D. Can you please suggest whether I can use Ubuntu to recover D? Thanks in advance.
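
    A hedged recovery sketch (not advice from this thread, though TestDisk is a standard tool for exactly this situation), run from an Ubuntu live session:

        sudo apt-get install testdisk
        sudo testdisk        # pick the disk -> Analyse -> Quick Search
        # if the lost NTFS partition is found, choose Write to restore its
        # partition-table entry; photorec (same package) can carve files
        # out even when the partition itself cannot be restored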

    Read the article

  • Organizing your Data Access Layer

    - by nighthawk457
    I am using Entity Framework as my ORM in an ASP.NET application. I have my database already created, so I ended up generating the entity model from it. What is a good way to organize files/classes in the data access layer? My Entity Framework model is in a class library, and I was planning on adding additional classes per entity (i.e., per database table) and putting all the queries related to those tables in their respective classes. I am not sure if this is the right approach, and if it is, then where do the queries requiring data from multiple tables go? Am I completely wrong in organizing my files based on entities/tables, and should I organize them based on functional areas instead?
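
    One possible shape for that split (a hedged C# sketch; all type names invented, with AppEntities standing in for the generated context):

        using System.Linq;

        // Single-table queries live in a class per entity...
        public class OrderQueries
        {
            private readonly AppEntities _db;
            public OrderQueries(AppEntities db) { _db = db; }

            public Order Find(int id) { return _db.Orders.Find(id); }
        }

        // ...while queries spanning tables get their own functional class,
        // so neither table's class has to claim them
        public class SalesReportQueries
        {
            private readonly AppEntities _db;
            public SalesReportQueries(AppEntities db) { _db = db; }

            public IQueryable<string> CustomersWithOpenOrders()
            {
                return _db.Orders.Where(o => o.IsOpen)
                          .Select(o => o.Customer.Name)
                          .Distinct();
            }
        }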

    Read the article

  • PHP - Data Access Layer

    - by scarpacci
    I am currently reviewing a code base and noticed that the majority of the calls (along with DB connections) are just buried inside the PHP scripts. I would have assumed that, as in other languages, they would have developed some sort of data access layer (like I would do in .NET or Java) for all of the communication with the DB (or implemented MVC, etc.). Is this still a common pattern in PHP, or are there alternative methodologies/patterns for this technology? I am just trying to understand why the subs would have developed it this way. Any insight/info on how experienced developers approach data access in PHP would be very much appreciated.
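
    For contrast, a minimal PHP data-access class using PDO (a hedged sketch; table and class names invented):

        <?php
        // Pages depend on this gateway, not on connection details
        class UserGateway
        {
            private $db;

            public function __construct(PDO $db)
            {
                $this->db = $db;
            }

            public function findByEmail($email)
            {
                $stmt = $this->db->prepare('SELECT * FROM users WHERE email = ?');
                $stmt->execute(array($email));
                return $stmt->fetch(PDO::FETCH_ASSOC);
            }
        }

        $gateway = new UserGateway(new PDO('mysql:host=localhost;dbname=app', 'user', 'pass'));
        $user = $gateway->findByEmail('someone@example.com');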

    Read the article

  • Can I associate a github gist with an organization?

    - by yc01
    My team has a GitHub organization account. A lot of the work I do results in one-off scripts that we want to be able to have on our organization page, but that aren't big enough projects to justify their own repositories. Is there any way to associate Gists with GitHub organization accounts? If not, what's the best way to check in or associate smaller scripts with a shared organizational repository on GitHub?
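
    If Gists turn out not to support organizations, one hedged workaround is a single shared "snippets" repository (repository and file names invented):

        git clone git@github.com:your-org/snippets.git
        cd snippets
        mkdir -p scripts && cp ~/one-off-script.sh scripts/
        git add scripts/one-off-script.sh
        git commit -m "Add one-off script"
        git push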

    Read the article
