Search Results

Search found 21664 results on 867 pages for 'process innovation'.

  • SQL Server 2008 R2 Replication log reader could not execute sp_replcmds

    - by user49352
    This log reader agent worked perfectly for several months, until the user referenced in the error was removed from the domain. After that, the error 'The process could not execute 'sp_replcmds' on 'SERVER'' was received, with the further detail 'Could not obtain information about Windows NT group/user' referencing said user. This user was referenced nowhere in the log reader agent other than the Publication Access List, from which it was subsequently removed. The agent would still not start successfully. The simple problem here, I believe, is that the log reader agent was created under that user, who no longer exists in the domain. Is there an 'owner' somewhere that needs to be changed? Every other function on the database continues to execute successfully. Any other help or thoughts would be appreciated.
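
    If the agent job's owner is indeed the orphaned account, a hedged T-SQL sketch of how one might check and reassign it; the job name is a placeholder, and the category name is worth verifying on the server:

        -- List replication log reader jobs and their current owners
        USE msdb;
        SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner_login
        FROM dbo.sysjobs AS j
        JOIN dbo.syscategories AS c ON c.category_id = j.category_id
        WHERE c.name = N'REPL-LogReader';

        -- Reassign the job to a login that still exists (job name is hypothetical)
        EXEC dbo.sp_update_job
            @job_name = N'YOUR-LOGREADER-JOB',
            @owner_login_name = N'sa';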

  • fdisk -l only displays boot partition

    - by Franklin
    I have a SAN, and it's able to read and write to the 50TB RAID just fine, but when I run fdisk -l it only lists the boot partition of the SAN server, and doesn't display anything about the other partitions on the RAID. I've also tried using parted -l with the same result. Now when I type mount it shows that the partitions are mounted just fine. I've never seen this happen. The box is running Openfiler 2.3 (I know it's old, we're in the process of upgrading all our old equipment). We have another SAN that's configured almost identically, and it's able to display the partition info with either of the two commands I mentioned above.
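
    A hedged sketch of how one might probe further; /dev/sdb is a placeholder for the RAID device, and if the volume is carved out with LVM (common on Openfiler), fdisk would show no partitions for it anyway:

        cat /proc/partitions      # does the kernel expose the RAID block device?
        fdisk -l /dev/sdb         # query the device directly instead of the default scan
        parted /dev/sdb print     # parted equivalent
        pvs && lvs                # if it's LVM, the "partitions" are logical volumes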

  • my sweet old VHS collection

    - by microspino
    What are the best procedure and digital format to resurrect my old VHS library in a way that lets me watch it on my LCD TV? I have a not-so-big collection of about 100 VHS tapes. I have plenty of storage. I have a network media tank (an A-110 Popcorn Hour, but I can also purchase a new media center if needed). I have an old working VCR (but again, I can buy a specific new one if you think that's better for preserving quality). The VHS cassette collection seems to have retained good quality over the years. Of course I have computers (both Mac and PC) to do the processing. Which software do I need or am I missing? Please give me some advice.
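
    Once the tapes are captured (for example via a USB capture device), a hedged ffmpeg sketch for the encoding step; file names and quality settings are placeholders:

        # Deinterlace (VHS is interlaced) and encode to H.264 MP4, which the
        # Popcorn Hour A-110 plays; -crf 18 is near-transparent quality.
        # (Older ffmpeg builds may need "-strict experimental" for the aac encoder.)
        ffmpeg -i capture.avi -vf yadif -c:v libx264 -crf 18 -preset slow \
               -c:a aac -b:a 192k movie.mp4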

  • Is a big name computer science degree worth the cost?

    - by Serplat
    I'm currently in high school and trying to look into what I want to do after I graduate. I know that I will be going to college and that I want a degree in computer science; however, I'm not entirely sure where I want to go (I haven't started the application process yet). I already have a decent amount of programming experience (over the summers I have been hired to program at a local university), and I'm pretty capable of teaching myself most of the material I come across through either books or web documentation. I'm interested in whether it is worth it to get a degree from a major, big-name computer science university at $50,000 each year, as opposed to going to a local state school for only $20,000. For my Bachelor's degree alone, that would be $120,000 more than the state school. I've also heard that where you get your Bachelor's doesn't matter much if you plan to get a Master's degree: many people recommend going somewhere like a state school for your Bachelor's and then trying to get into a more prestigious school for your Master's. Has anybody found any truth in this? Basically, is going to a big-name computer science school for a Bachelor's degree really worth the added expense?

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the '-c' switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times. Does anyone know why a) the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

        linux-box# fsck.ext3 -c /dev/sdx1
        e2fsck 1.40.2 (12-Jul-2007)
        Checking for bad blocks (read-only test): done
        Pass 1: Checking inodes, blocks, and sizes
        Pass 2: Checking directory structure
        Pass 3: Checking directory connectivity
        Pass 4: Checking reference counts
        Pass 5: Checking group summary information

        Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
        Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks
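
    A hedged sketch of how one might confirm the modification comes from -c itself (which appears to rewrite the bad-blocks inode even when the scan finds nothing) rather than from real damage:

        fsck.ext3 -n /dev/sdx1    # read-only pass; should report the filesystem clean
        dumpe2fs -b /dev/sdx1     # list any blocks currently recorded as bad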

  • Mac OS X Snow Leopard: move to folder on keystroke

    - by Georges Oates Larsen
    On a weekly basis I have to organize thousands of photos (in groups of up to five thousand) into folders depending upon what they contain, to then narrow them down to the best photos of the same subject. This means I am constantly scanning through photos and sorting them into folders. The problem is, the process of stopping my scan and then dragging each photo all the way into a folder myself is bogging me down. Would it be possible, for instance using something like AppleScript, or even going so far as using Xcode/Cocoa, to create a shortcut that moves whatever I have selected in the Finder to a pre-specified folder? Does something like this already exist?
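
    A minimal AppleScript sketch of the idea; "Keepers" is a placeholder folder in the home directory, and in Snow Leopard a script like this can be wrapped in an Automator Service so it can be bound to a keyboard shortcut:

        -- Move the current Finder selection into a pre-specified folder
        tell application "Finder"
            move selection to folder "Keepers" of home
        end tell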

  • Is the development of CLI apps considered "backwards"?

    - by user61852
    I am a DBA fledgling with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve daily repetitive tasks or eliminate human error from more complex, albeit not so daily, tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of one process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backwards", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but live projects, with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also believe all solutions, whether web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project?

  • canvas tile grid, hover effects, single tilesheet, etc

    - by user121730
    I'm currently in the process of building both the client and server side of an HTML5, canvas, and WebSocket game. This is what I have thus far for the client: http://jsfiddle.net/dDmTf/7/

    Current obstacles:

    - The hover effect has no idea what to put back after the mouse leaves. Currently it's just drawing a "void" tile, but I can't figure out how to redraw a single tile without redrawing the whole map (see the sketch below).
    - How would I go about storing multiple layers within the map variable? I was considering just using a multi-dimensional array for each layer (similar to what you see as the current array) and iterating through it, but is that really an efficient way of doing it?

    Side note: the tile sheet being used for the jsfiddle display is only for development. I'll be replacing it as things progress in the engine.

    Hopefully you guys can help me; I've been struggling to get through things, since I'm learning how this kind of stuff works as I go. If you have any pointers for my JavaScript, feel free to share them. As I'm more or less learning advanced usage as I go, I'm sure I'm doing plenty of things wrong.

    Note: I will continue to update this post as the engine improves, updating the jsfiddle link and the obstacles list by striking things that have been solved or adding additions. Thanks!
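
    On the single-tile redraw question, a hedged JavaScript sketch; the names (ctx, tilesheet, map, TILE, SHEET_COLS) are assumptions standing in for the fiddle's own variables:

        // Redraw one tile at (col, row) from the tilesheet, leaving the rest
        // of the canvas untouched; drawImage's 9-argument form takes a source
        // rectangle in the sheet and a destination rectangle on the canvas.
        function redrawTile(ctx, tilesheet, map, col, row, TILE, SHEET_COLS) {
            var index = map[row][col];
            var sx = (index % SHEET_COLS) * TILE;
            var sy = Math.floor(index / SHEET_COLS) * TILE;
            ctx.clearRect(col * TILE, row * TILE, TILE, TILE);
            ctx.drawImage(tilesheet, sx, sy, TILE, TILE,
                          col * TILE, row * TILE, TILE, TILE);
        }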

  • Memory compatibility?

    - by nvillec
    I'm in the process of building a PC intended mostly for gaming, and I have a few questions. Currently I only have the motherboard and CPU:

        Motherboard: GIGABYTE GA-Z68AP-D3
        CPU: Intel Core i3-2120

    My main question is about memory compatibility. I have four Hynix 2GB HMT125U7BFR8C-G7 modules with light-to-moderate use lying around, and I'd love to save a few bucks if I can. I've read that this is server memory... a) Will that be a problem for PC use? b) Is it compatible with the motherboard? I've emailed Hynix and checked Crucial to no avail. If incompatible, what memory would be a good fit given the components I have? The motherboard has 4 sockets and supports up to 32GB, but I don't know that I have the budget for that at the moment. Thanks!!

  • Naming a class that processes orders

    - by p.campbell
    I'm in the midst of refactoring a project. I've recently read Clean Code and want to heed some of the advice within, with particular interest in the Single Responsibility Principle (SRP). Currently there's a class called OrderProcessor in the context of a manufacturing product order system. This class currently performs the following routine every n minutes:

    - check the database for newly submitted, unprocessed orders (via a Data Layer class already, phew!)
    - gather all the details of the orders
    - mark them as in-process
    - iterate through each to: perform some integrity checking; call a web service on a 3rd-party system to place the order; check the status return value of the web service for success/fail; email somebody if the web service returns fail
    - constantly log to a text file on each operation or possible fail point

    I've started by breaking out this class into new classes like:

    - OrderService - poor name; this is the one that wakes up every n minutes
    - OrderGatherer - calls the DL to get the orders from the database
    - OrderIterator (? seems too forced or poorly named)
    - OrderPlacer - calls the web service to place the order
    - EmailSender
    - Logger

    I'm struggling to find good names for each class and to implement SRP in a reasonable way. How could this class be separated into new classes with discrete responsibilities? (A sketch of one possibility follows.)
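
    A hedged sketch of one possible decomposition, in Java for illustration; every name here is a suggestion, not a prescription:

        import java.util.List;

        interface OrderSource     { List<Order> fetchUnprocessed(); } // wraps the data layer
        interface OrderValidator  { boolean isValid(Order order); }
        interface OrderPlacer     { boolean place(Order order); }     // wraps the web service
        interface FailureNotifier { void notifyFailure(Order order); }

        record Order(long id) {}

        // Its single responsibility: coordinate the collaborators on each tick.
        class OrderProcessingService {
            private final OrderSource source;
            private final OrderValidator validator;
            private final OrderPlacer placer;
            private final FailureNotifier notifier;

            OrderProcessingService(OrderSource s, OrderValidator v,
                                   OrderPlacer p, FailureNotifier n) {
                this.source = s; this.validator = v; this.placer = p; this.notifier = n;
            }

            void runOnce() {
                for (Order order : source.fetchUnprocessed()) {
                    // Notify on either an integrity failure or a placement failure.
                    if (!validator.isValid(order) || !placer.place(order)) {
                        notifier.notifyFailure(order);
                    }
                }
            }
        }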

  • Why do Windows share permissions change file permissions?

    - by Andrew Rump
    When you create a (file system) share on Windows 2008 R2 with access for specific users, does it change the access rights on the files to match the access rights on the share? We just killed our intranet web site when sharing the INetPub folder (to a few specified users). It removed the file access rights for authenticated users, i.e., users could not log in using single sign-on (using IE & AD)! Could someone please tell me why it behaves like this? We now have to reapply the access rights every time we change the users on the share, killing the site in the process each time!
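
    One known gotcha (worth verifying) is that the simple file-sharing wizard can rewrite the NTFS ACL to match the share list, while 'Advanced Sharing' leaves the NTFS permissions alone. If that is what happened, a hedged sketch of re-granting read access from an elevated prompt; the path and group are placeholders:

        icacls "C:\inetpub\wwwroot" /grant "Authenticated Users":(OI)(CI)RX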

  • Allow READ access to local folders in 2003SBS AD

    - by Dan M.
    I have an SBS2003 client with a mess of a domain that is in the process of being cleaned. But for the life of me I cannot find a setting that will allow write access to the local hard disk for domain users with profiles redirected to the server. This is needed only for one program that will not follow a symbolic link to the network path; instead it seems to be hard-coded to the %appdata% folder, but only on the C: drive. So the question is: how can I allow "Domain Users" write access to the local %appdata% directory? I have tried setting it manually on a machine, but it kept resetting to read-only no matter how many times I tried. Every time I unchecked the RO property it would reset sometime right after I hit OK. Thanks in advance! Dan

  • .NET Dependency Management Systems

    - by StriplingWarrior
    I have some .NET projects that are starting to get large enough to merit looking into dependency management solutions, so we don't have to copy binaries from one project to another. Here's what I've found so far:

    - NPanday is based on a port of Maven. I can't tell how actively it is worked on, but the last release was in May 2011.
    - NuGet seems to be under active development, and it appears to have support directly from Microsoft. Some people complained that it "only addresses dependency resolution," but I don't know what else it should address, or whether it has added more features since that point. It does appear to have recently added the ability to import binaries as part of the build process, so we don't have to commit them to our repositories (see the sketch below).
    - Refix appears to still be in beta, having received no attention since September 2011.

    Would somebody with recent experience using any of these dependency management tools (or any others that work well) share their experience? Is NuGet mature enough to use for dependency management? If not, what does it lack?
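
    For reference, a hedged sketch of the NuGet workflow that keeps binaries out of source control; the package id and version are placeholders:

        <?xml version="1.0" encoding="utf-8"?>
        <!-- packages.config, committed next to the project file -->
        <packages>
          <package id="Newtonsoft.Json" version="4.5.1" targetFramework="net40" />
        </packages>

    and on each build machine:

        nuget.exe install packages.config -OutputDirectory packages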

  • Reference Data Management

    - by rahulkamath
    Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise MDM solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, revamping organization structures to drive business transformation and operational efficiencies, or mastering sales territories in light of rapid-fire acquisitions that require frequent sales territory refinement, equitable distribution of leads and accounts to salespersons, and alignment of budget/forecast with results to optimize sales coverage. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation.

    What is reference data? Reference data is a close cousin of master data. While master data may be more rapidly changing, requires consensus building across stakeholders, and lends structure to business transactions, reference data is simpler and more slowly changing, but has semantic content that is used to categorize or group other information assets – including master data – and give them contextual value. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards; the following list gives illustrative examples of each type.

    - Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status); Status Codes; Role Codes; Domain Values
    - Taxonomies: Industry Classification Categories and Codes, e.g., the North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss
    - Relationships / Mappings: Product / Segment and Product / Geo; City → State → Postal Codes; Customer / Market Segment and Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD) mappings, e.g., ICD-9 → ICD-10
    - Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time and Time Zones (e.g., ISO 8601); Tax Rates

    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior, or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Some sample use cases of reference data management follow.

    Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since the ICD-9 and ICD-10 code sets offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders, much as master data management does. Moreover, to build reports to understand utilization, frequency, and quality of diagnoses, medical practitioners may need to "cross-walk" mappings – either forward to ICD-10 or backwards to ICD-9 – depending upon the reporting time horizon.

    Spend Management: Product, Service & Supplier Codes. Similarly, as an enterprise looks to rationalize suppliers and leverage its spend, conforming supplier codes, as well as product and service codes, requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify, and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager's time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card), ACH, and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value-added tasks.

    Specialty Finance: Point-of-Sale Transaction Codes and Product Codes. In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing, and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are released in a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes to the appropriate product and GL codes to comply with the changing regulations.

    Multi-National Companies: Industry Classification Schemes. As companies grow and expand across geographies, a typical reference data challenge they encounter is reconciling the various versions of industry classification schemes in use across nations. While the United States, Mexico, and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries use variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assessing which applications were impacted by a given change.
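
    To make the "cross-walk" idea concrete, a hedged SQL sketch; the table and column names are invented for illustration:

        -- A many-to-many crosswalk between the two code sets
        CREATE TABLE icd_crosswalk (
            icd9_code   VARCHAR(8) NOT NULL,
            icd10_code  VARCHAR(8) NOT NULL,
            approximate CHAR(1)    NOT NULL,  -- 'Y' when human judgment was applied
            PRIMARY KEY (icd9_code, icd10_code)
        );

        -- Forward walk: report legacy ICD-9-coded encounters under ICD-10 headings
        SELECT x.icd10_code, COUNT(*) AS diagnosis_count
        FROM encounters e
        JOIN icd_crosswalk x ON x.icd9_code = e.diagnosis_code
        GROUP BY x.icd10_code;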

  • Languages on a resume: Is it better to put "C/C++" or "C, C++"?

    - by Kevin
    I'm graduating in a couple of weeks, and my resume (as expected) lists the languages that I've had experience with. Previously I've put "C/C++"; however, back then I didn't have as much experience with these two languages as I do now. Now that I've formally learned these two languages, it has become evident to me (and anyone who really knows these languages) that they are similar, and completely dissimilar, at the same time. Sure, most C code is compilable C++ code, but syntax and incorporation of library functions is pretty much where the similarities end. In most non-trivial problems, chances are that the desirable C++ solution will be different from the desirable C solution. My question: will recruiters take note or care about whether you put "C/C++" as opposed to "C, C++"? Will they assume a lack of knowledge of the workings of either because of the first form, or perhaps see the second form as a potential "resume beefer" (listing them as two languages instead of "one")? Furthermore, for jobs that you've applied to that were particularly interested in these two languages, did the interview process include questions about the differences between C programming and C++ programming (so, about actual programming techniques, not only the extra paradigms in the latter)?

  • Bizarre SSH Problem - It won't even start

    - by thallium85
    I recently got Ubuntu 12.04 Precise, got it up and running with some MediaWiki software and a static IP on the box and router, and was able to access the main page even from a cell phone. Everything seemed great... Then I wanted to finally get rid of the monitor and keyboard and log in remotely via SSH. I installed openssh-server, let everything point to port 22 for a test run, and installed PuTTY on my Windows XP machine. I got a connection refused. Went back and started checking the Ubuntu install itself (I'm under root from this point on):

        $ sudo -s
        $ service ssh status
        ssh stop/waiting
        $ service ssh start
        ssh start/running, process 2212
        $ service ssh status
        ssh stop/waiting

    Apparently ssh has stopped or is waiting for something...

        $ ssh localhost
        ssh: connect to host localhost port 22: Connection refused

    I can't even connect to myself... I checked ufw (firewall) to see if port 22 is doing alright:

        $ sudo ufw status
        Status: active
        To           Action    From
        22           ALLOW     Anywhere
        22/tcp       ALLOW     Anywhere
        22           ALLOW     Anywhere (v6)
        22/tcp       ALLOW     Anywhere (v6)

    sshd_config shows only Port 22. Is ssh not using the right IP address at all? I just don't get what I did wrong here. When this is up and running I will definitely change the port number, but for now I don't want to mess with the default install too much until a test run with PuTTY is successful.

    Edit: Here are my sshd_config file and my ssh_config file. The command /usr/sbin/sshd -p 22 -D -d -e returns:

        /etc/ssh/sshd_config line 159: Subsystem 'sftp' already defined.

    Edit: @phoibus: moving the sshd_config file and reinstalling did the trick! service ssh status now shows that ssh is running, and I am able to log in from my Windows XP computer remotely via PuTTY. Thanks so much! I can now use my monitor for other things!
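
    For this class of problem, a hedged sketch worth knowing: sshd can validate its own configuration without binding a port, which surfaces errors like the duplicate Subsystem line directly:

        sudo /usr/sbin/sshd -t    # test mode: parse sshd_config, print errors, exit
        sudo service ssh start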

  • CentOS and Broadcom Wireless Drivers

    - by Sami
    Hi, I'm tired of reading and tired of trying to find a solution myself; that's why I decided to post my question. I'm following the tutorial located at the CentOS Wiki to install the driver for my wifi device. However, I'm facing a strange error at the beginning of the process:

        make -C /lib/modules/`uname -r`/build/ M=`pwd`
        make: *** /lib/modules/2.6.18-194.11.3.el5PAE/build/: File not found

    Does anyone know what I'm doing wrong? This is the first time I've tried to install Linux on my laptop.
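
    A hedged sketch of the usual fix: the build/ symlink is dangling because the headers for the running kernel are not installed, and for a PAE kernel the matching package is kernel-PAE-devel (package names worth verifying against the running kernel):

        yum install kernel-PAE-devel gcc make
        ls -l /lib/modules/$(uname -r)/build    # should now resolve instead of dangling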

  • How to fix /etc/ folder on Mac OS X

    - by justinhj
    I was following a tutorial which had this command to create a launchd.conf file in /etc/:

        sudo echo "some command" >> /etc/launchd.conf

    But it wouldn't work; I got permission denied after entering my admin password. So it seemed like the permissions for the link were wrong, so I did 'sudo chmod 755 /etc/'. But now I can't load a terminal; I get the error:

        The administrator has set your shell to an illegal value

    If I try to sudo a command now I get:

        sudo: can't open /private/etc/sudoers: Permission denied
        sudo: no valid sudoers sources found, quitting
        Process tramp/sudo root@localhost exited abnormally with code 1

    This is what the link /etc looks like; what should it look like, and how do I restore it?

        lrwxr-xr-x   1 root wheel   11 Jul 21  2011 etc -> private/etc
        /private/etc ...
        drw-r--r-- 111 root wheel 3774 Mar 26 02:25 etc

    Edit: I'm using Mac OS X 10.7.3
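
    A hedged sketch of one recovery path, assuming only the directory permissions were damaged: boot into single-user mode (hold Cmd-S at startup), then restore the standard 755 mode on the real directory:

        /sbin/mount -uw /            # remount the root filesystem read-write
        /bin/chmod 755 /private/etc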

  • TDD with SQL and data manipulation functions

    - by Xophmeister
    While I'm a professional programmer, I've never been formally trained in software engineering. As I frequently visit here and SO, I've noticed a trend for writing unit tests whenever possible, and, as my software gets more complex and sophisticated, I see automated testing as a good aid to debugging. However, most of my work involves writing complex SQL and then processing the output in some way. How would you write a test to ensure your SQL was returning the correct data, for example? Then, say the data wasn't under your control (e.g., that of a 3rd-party system); how can you efficiently test your processing routines without having to hand-write reams of dummy data? The best solution I can think of is making views of the data that, together, cover most cases. I can then join those views with my SQL to see if it's returning the correct records, and manually process the views to see if my functions, etc. are doing what they're supposed to. Still, it seems excessive and flaky; particularly finding data to test against...
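
    A hedged sketch of the fixture idea in plain SQL; the names and the query under test are invented for illustration:

        -- Small fixture with known rows, including a boundary case
        CREATE TABLE fixture_orders (id INT, status VARCHAR(10), total DECIMAL(9,2));
        INSERT INTO fixture_orders VALUES (1, 'OPEN',   10.00);
        INSERT INTO fixture_orders VALUES (2, 'CLOSED', 99.99);
        INSERT INTO fixture_orders VALUES (3, 'OPEN',    0.00);  -- boundary case

        -- The query under test: a correct implementation returns exactly ids 1 and 3
        SELECT id FROM fixture_orders WHERE status = 'OPEN' ORDER BY id;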

  • Network trials and tribulations

    - by MauiWowie
    Hi, I am in the process of setting up a small network (two PCs, both running Windows 7) using a D-Link DI-604 router (which acts only as a hub; it's not set up in any way). In the 'Network and Sharing Center' in 'Control Panel' I clicked 'Set up a New (Connection or) Network'. All went well up until the moment I attempted to connect one PC to the other (in the 'Connect' dialog I entered the other PC's IP, no password). I can use both PCs to connect to and browse the net, though, so the router/hub does not seem to be the problem. And I must have done something right, because the other computer shows up in the 'Network Map'. Any and all help is much appreciated!

  • Upgrade went wrong, laptop essentially 'bricked'

    - by hexagonheat
    I have an old netbook I was trying to upgrade from 10.04 to 10.10. Ubuntu was in the process of upgrading when everything completely froze. I left it to sit for an hour, but it would not respond to anything, so I powered down the machine, and after that it didn't have the necessary files to run Ubuntu. I went to the terminal and it told me to put in some command, which I cannot remember, to 'rebuild' something. That brings me to now: when I turn on the laptop it comes up with a screen "GNU GRUB version 1.98+20100804-5ubuntu3.3" and a bunch of options such as:

        1. Ubuntu, with Linux 2.6.35-32-generic
        2. Ubuntu, with Linux 2.6.35-32-generic (recovery mode)

    etc. (there are about 15 of these, with different numbers after 2.6.35 and the word 'generic'). It doesn't seem to matter which I pick: it goes to the "Ubuntu" loading screen with the colored dots, but then every time it freezes and I have to reboot to the same thing. I can't seem to get a terminal prompt anywhere either. Any ideas? I can't think of what to do :(
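
    A hedged sketch of the usual way out of an interrupted upgrade: pick a "(recovery mode)" entry from that GRUB menu, choose the root-shell option, and let dpkg finish what the upgrade started:

        mount -o remount,rw /    # recovery shells often mount / read-only
        dpkg --configure -a      # finish configuring half-installed packages
        apt-get -f install       # repair any remaining broken dependencies
        reboot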

  • The Enterprise Side of JavaFX: Part Two

    - by Janice J. Heiss
    A new article, part of a three-part series, now up on the front page of otn/java, by Java Champion Adam Bien, titled “The Enterprise Side of JavaFX,” shows developers how to implement the LightView UI dashboard with JavaFX 2. Bien explains that “the RESTful back end of the LightView application comes with a rudimentary HTML page that is used to start/stop the monitoring service, set the snapshot interval, and activate/deactivate the GlassFish monitoring capabilities.”He explains that “the configuration view implemented in the org.lightview.view.Browser component is needed only to start or stop the monitoring process or set the monitoring interval.”Bien concludes his article with a general summary of the principles applied:“JavaFX encourages encapsulation without forcing you to build models for each visual component. With the availability of bindable properties, the boundary between the view and the model can be reduced to an expressive set of bindable properties. Wrapping JavaFX components with ordinary Java classes further reduces the complexity. Instead of dealing with low-level JavaFX mechanics all the time, you can build simple components and break down the complexity of the presentation logic into understandable pieces. CSS skinning further helps with the separation of the code that is needed for the implementation of the presentation logic and the visual appearance of the application on the screen. You can adjust significant portions of an application's look and feel directly in CSS files without touching the actual source code.”Check out the article here.
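
    As a taste of the bindable-properties idea the article describes, a minimal standalone sketch using the JavaFX 2 property classes (no stage or scene needed):

        import javafx.beans.property.SimpleStringProperty;
        import javafx.beans.property.StringProperty;

        public class BindingSketch {
            public static void main(String[] args) {
                StringProperty model = new SimpleStringProperty("42");
                StringProperty view  = new SimpleStringProperty();
                view.bind(model);                // the view now follows the model
                model.set("99");
                System.out.println(view.get()); // prints 99
            }
        }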

  • What is the computer "doing" when it is running slow and task manager is not showing any CPU activity?

    - by Joakim Tall
    A typical example is when shutting down a memory-intensive application: it can take quite a while before the computer gets back up to speed. Is there some inherent cost in releasing memory? Or is it throttled by some kind of hard drive activity, and if so, is there any good way to track that? I usually bring up Task Manager when a computer is running slow, and usually sorting by CPU activity can show which process is causing the problem, but sometimes there is no activity showing. And yes, I "show processes from all users". I have been wondering this since the days of Win2K :)

  • Merging Two Git Repositories with branches

    - by Joel K
    I realize there's a Stack Overflow question about combining multiple git repositories: http://stackoverflow.com/questions/277029/combining-multiple-git-repositories. But I haven't found git-stitch-repo to be quite the tool I'm looking for, and I also consider this more of a sysadmin task. How do I take code from an external repository and combine it with code from a primary repository while maintaining history/diffs and branches? Use case: an outside development team using SVN has ported to git and now wants to 'merge' their code into the main company's git repo. I've tried subtree merges, but I lose the history. I've tried git-stitch-repo, but that process results in an entirely new repo that's missing branches. I just want to slot in some outside code as a subdirectory in our current main repo, with as little disruption as possible, while maintaining the other project's history. Any success stories out there?
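
    A hedged sketch of the classic subtree merge, which keeps the outside history reachable through the merge commit (the remote name and paths are placeholders); fetching also brings their branches along as remote-tracking refs:

        git remote add outside /path/to/their/repo
        git fetch outside
        git merge -s ours --no-commit outside/master
        git read-tree --prefix=vendor/outside/ -u outside/master
        git commit -m "Merge outside project under vendor/outside/"
        # per-file history survives the move; --follow walks across it:
        git log --follow -- vendor/outside/somefile.c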

  • Can a MySQL slave be a master at the same time?

    - by mmattax
    I am in the process of migrating 2 DB servers (Master & Slave) to two new DB servers (Master and Slave):

        DB1 - Master (production)
        DB2 - Slave (production)
        DB3 - New Master
        DB4 - New Slave

    Currently I have the replication set up as:

        DB1 -> DB2
        DB3 -> DB4

    To get the production data replicated to the new servers, I'd like to get it "daisy chained" so that it looks like this:

        DB1 -> DB2 -> DB3 -> DB4

    Is this possible? When I run show master status; on DB2 (the production slave) the binlog position never seems to change:

        +------------------+----------+--------------+------------------+
        | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
        +------------------+----------+--------------+------------------+
        | mysql-bin.000020 |       98 |              |                  |
        +------------------+----------+--------------+------------------+

    I'm a bit confused as to why the binlog position is not changing on DB2; ideally it will be the master to DB3.
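
    It is possible; a hedged my.cnf sketch for DB2 (values are placeholders): a slave writes the statements it replicates to its own binary log only when log-slave-updates is enabled, which would explain why the position never moves:

        # on DB2, then restart mysqld
        [mysqld]
        server-id        = 2
        log-bin          = mysql-bin
        log-slave-updates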
