Search Results

Search found 35219 results on 1409 pages for 'without'.

Page 7/1409 | < Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >

  • Data Profiling without SSIS

    Strangely enough for a predominantly SSIS blog, this post is all about how to perform data profiling without using SSIS. Whilst the Data Profiling Task is a worthy addition, there are a couple of limitations I’ve encountered of late. The first is that it requires SQL Server 2008, and not everyone is there yet. The second is that it can only target SQL Server 2005 and above. What about older systems, which are the ones that we probably need to investigate the most, or other vendor databases such as Oracle? With these limitations in mind I did some searching to find a quick and easy alternative to help me perform some data profiling for a project I was working on recently. I only had SQL Server 2005 available, and anyway most of my target source systems were Oracle, and of course I had short timescales. I looked at several options. Some never got beyond the download stage: they failed to install or just did not run. Others provided less than I could have produced myself by spending 2 minutes writing some basic SQL queries. In the end I settled on an open source product called DataCleaner. To quote from their website: DataCleaner is an Open Source application for profiling, validating and comparing data. These activities help you administer and monitor your data quality in order to ensure that your data is useful and applicable to your business situation. DataCleaner is the free alternative to software for master data management (MDM) methodologies, data warehousing (DW) projects, statistical research, preparation for extract-transform-load (ETL) activities and more. DataCleaner is developed in Java and licensed under LGPL. As quoted above it claims to support profiling, validating and comparing data, but I didn’t really get past the profiling functions, so won’t comment on the other two. The profiling, whilst not perfect, certainly saved some time compared to the limited alternatives. The ability to profile heterogeneous data sources is a big advantage over the SSIS option, and I found it overall quite easy to use, and performance was good. I could see it struggling at times, but actually for what it does I was impressed. It had some data type niggles with Oracle, and some metrics seem a little strange, although thankfully they were easy to augment with some SQL queries to ensure a consistent picture. The report export options didn’t do it for me, but copy and paste with a bit of Excel magic was sufficient. One initial point for me personally is that I have had limited exposure to things of the Java persuasion, and whilst I normally get by fine, sometimes the simplest things can throw me. For example, installing a JDBC driver: why do I have to copy files to make it all work, has nobody ever heard of an MSI? In case there are other people out there like me who have become totally indoctrinated with the Microsoft software paradigm, I’ve written a quick start guide that details every step required. Steps 1-5 are the key ones; the rest is really an excuse for some screenshots to show you the tool.
    Quick Start Guide
    Step 1 - Download DataCleaner. Choose the Microsoft Windows zipped exe option; I chose the latest stable build, currently DataCleaner 1.5.3 (final). Extract the files to a suitable location.
    Step 2 - Download Java. If you try and run datacleaner.exe without Java it will warn you, and then open your default browser and take you to the Java download site. Follow the installation instructions from there, normally just click Download Java a couple of times and you’re done.
    Step 3 - Download the Microsoft SQL Server JDBC Driver. You may have SQL Server installed, but you won’t have a JDBC driver. Version 3.0 is the latest as of April 2010. There is no real installer (we are in the Java world here), but run the exe you downloaded to extract the files. The default Unzip to folder is not much help, so try a fully qualified path such as C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\ to ensure you can find the files afterwards.
    Step 4 - If you wish to use Windows Authentication to connect to your SQL Server, then first we need to copy a file so that DataCleaner can find it. Browse to the JDBC extract location from Step 3 and drill down to the file sqljdbc_auth.dll. You will have to choose the correct directory for your processor architecture, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\auth\x86\sqljdbc_auth.dll. Now copy this file to the DataCleaner extract folder you chose in Step 1. An alternative method is to edit datacleaner.cmd in the DataCleaner extract folder as detailed in this DataCleaner wiki topic, but I find copying the file simpler.
    Step 5 – Now let’s run DataCleaner: just run datacleaner.exe from the extract folder you chose in Step 1.
    Step 6 – Complete or skip the registration screen, and ignore the task window for now. In the main window click Settings.
    Step 7 – In the Settings dialog, select the Database drivers tab, then click Register database driver and select the Local JAR file option.
    Step 8 – Browse to the JDBC driver extract location from Step 3 and drill down to select sqljdbc4.jar, e.g. C:\Program Files\Microsoft SQL Server JDBC Driver 3.0\sqljdbc_3.0\enu\sqljdbc4.jar.
    Step 9 – Select the Database driver class as com.microsoft.sqlserver.jdbc.SQLServerDriver, and then click the Test and Save database driver button.
    Step 10 - You should be back at the Settings dialog with a list of drivers that now includes SQL Server. Just click Save Settings to persist all your hard work.
    Step 11 – Now we can start to profile some data. In the main DataCleaner window click New Task, and then Profile from the task window.
    Step 12 – In the Profile window click Open Database.
    Step 13 – Now choose the SQL Server connection string option. Selecting a connection string gives us a template like jdbc:sqlserver://<hostname>:1433;databaseName=<database>, but obviously it requires some details to be entered, for example jdbc:sqlserver://localhost:1433;databaseName=SQLBits. This will connect to the database called SQLBits on my local machine. The port may also have to be changed, for example when you have multiple instances of SQL Server running. If using SQL Server Authentication, enter a username and password as required and then click Connect to database. You can use Windows Authentication instead: just add integratedSecurity=true to the end of your connection string, e.g. jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true. If you didn’t complete Step 4 above you will need to do so now and restart DataCleaner before it will work. Manually setting the connection string is fine, but creating a named connection makes more sense if you will be spending any length of time profiling a specific database. As highlighted in the left-hand screenshot, at the bottom of the dialog it includes partial instructions on how to create named connections. In the folder shown, C:\Users\<Username>\.datacleaner\1.5.3, open the datacleaner-config.xml file in your editor of choice and add your own details.
    You’ll see a sample connection in the file already; just add yours following the same pattern, e.g.
        <!-- Darren's Named Connections -->
        <bean class="dk.eobjects.datacleaner.gui.model.NamedConnection">
          <property name="name" value="SQLBits Local Connection" />
          <property name="driverClass" value="com.microsoft.sqlserver.jdbc.SQLServerDriver" />
          <property name="connectionString" value="jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true" />
          <property name="tableTypes">
            <list>
              <value>TABLE</value>
              <value>VIEW</value>
            </list>
          </property>
        </bean>
    Step 14 – Once back at the Profile window, you should now see your schemas, tables and/or views listed down the left-hand side. Browse this tree and double-click a table to select it for profiling. You can then click Add profile, and choose some profiling options, before finally clicking Run profiling. You can see below a sample output for three of the most common profiles. I hope this has given you a taster for DataCleaner, and should help you get up and running pretty quickly.
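    Not part of the original post, but if you want to sanity-check the JDBC driver and connection string outside DataCleaner, a minimal Java sketch along these lines should do it. It assumes sqljdbc4.jar is on the classpath (and sqljdbc_auth.dll on the Java library path if you use integratedSecurity=true), and it reuses the example SQLBits connection string from above.
        import java.sql.Connection;
        import java.sql.DatabaseMetaData;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class JdbcSmokeTest {
            public static void main(String[] args) throws Exception {
                // Same driver class and connection string registered in DataCleaner above.
                Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver");
                String url = "jdbc:sqlserver://localhost:1433;databaseName=SQLBits;integratedSecurity=true";

                try (Connection conn = DriverManager.getConnection(url)) {
                    DatabaseMetaData meta = conn.getMetaData();
                    System.out.println("Connected to SQL Server " + meta.getDatabaseProductVersion());

                    // List the tables and views DataCleaner would show in its schema tree.
                    try (ResultSet rs = meta.getTables(null, "dbo", "%", new String[] { "TABLE", "VIEW" })) {
                        while (rs.next()) {
                            System.out.println(rs.getString("TABLE_SCHEM") + "." + rs.getString("TABLE_NAME"));
                        }
                    }
                }
            }
        }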

    Read the article

  • How to learn ASP.NET MVC without learning ASP.NET Web forms

    - by Naif
    First of all, I am not a web developer, but I can say that I understand in general the difference between PHP, ASP.NET, etc. I have played a little with ASP.NET and C# as well; however, I didn't continue the learning path. Now I'd like to learn ASP.NET MVC, but there is no book for a beginner in ASP.NET MVC, so I had a look at the tutorials. It seems that I need to learn C# first, as well as SQL Server and HTML; am I right? So please tell me how I can learn ASP.NET MVC directly (I mean without learning ASP.NET Web Forms). What do I need to learn? (You can assume that I am an absolute beginner.) Update: It is true that I can find ASP.NET MVC tutorials that explain ASP.NET MVC, but I used to find ASP.NET Web Forms books that explain SQL and C# at the same time and take you step by step. With ASP.NET MVC I don't know how to start! How can I learn SQL on its own and C# on its own and then combine them with ASP.NET MVC?

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5):
    mount: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so
    ubuntu@ubuntu:~$ dmesg | tail
    [ 1019.726530] Descriptor sense data with sense descriptors (in hex): [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [ 1019.726551] 1a 3e ed 92 [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00 [ 1019.726584] end_request: I/O error, dev sda, sector 440331666 [ 1019.726602] JBD: Failed to read block at offset 462 [ 1019.726609] ata1: EH complete [ 1019.726612] JBD: recovery failed [ 1019.726617] EXT4-fs (sda5): error loading journal
    When I open GParted (using the live CD), I get an exclamation mark next to my NTFS drive. Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following:
    ubuntu@ubuntu:~$ sudo fsck /dev/sda
    fsck from util-linux-ng 2.17.2 e2fsck 1.41.14 (22-Dec-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device>
    Update: I was able to fix the NTFS partition running chkdsk off HBCD, but it seems that the superblock problem still remains.
    Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5

    Read the article

  • How to install OpenCV without nVidia drivers

    - by Subhamoy Sengupta
    I have a laptop with on-board Intel graphics. I have been using OpenCV for years with this machine and I have managed to avoid manual compilation so far. But in Ubuntu 13.10, when I try to install libopencv-dev from the repositories, it brings along libopencv-ocl, which seems to be dependent on nvidia drivers. Letting the driver install messes up my xserver completely and when I do glxinfo afterwards, I get this: name of display: :0.0 Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Error: couldn't find RGB GLX visual or fbconfig Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". Xlib: extension "GLX" missing on display ":0.0". To solve this, I purge all nVidia drivers and reinstall xserver, much like it has been suggested here, and when I purge the nvidia drivers, OpenCV development libraries are also removed, as apt-get tells me they are no longer needed. This is foreign to me, because I expected a warning that I have installed packages that depend on this, but how can removing a dependency automatically remove the package I installed without warnings or asking? I understand it has something to do with nVidia being the provider of the libopencv-ocl in the repo. How could I get around it? I would rather not compile OpenCV if I can help it. I have seen similar questions, but not a suitable answer.

    Read the article

  • Releasing an open source project without getting embarrassed

    - by Hopeful
    I've been working by myself on a fairly large open source project for quite a while and it's nearing the point where I'd like to release it. However, I'm self-taught and I don't really know anyone who could adequately review my project. A few years ago, I had released a small bit of code which pretty much got ripped apart (in a critical sense) on the forum where I released it. Even though the code worked, the criticism was accurate but brutal. It prompted me to begin searching for best practices for everything and in the end I feel that it made me a much better developer. I've gone over everything in my project so many times trying to make it perfect that I've lost count. I believe in my project and think it has the potential to help a lot of people, and I feel like I've done some cool things in interesting ways with it. Still, because I'm self-taught, I can't help but wonder what gaps exist in my self-education. The way my code was ripped apart last time isn't something I'd like to repeat. My two biggest fears with releasing this project that I've poured countless hours into are being absolutely embarrassed because I missed some patently obvious things due to my self-education or, worse, releasing it to the sound of crickets. Is there anyone who has been in a similar situation? I'm not afraid of constructive criticism, so long as it is constructive and not just a rant on how I screwed up. I know there is a code review site on StackExchange, but it's not really set up for large projects and I didn't feel like the community there was large enough yet to get good feedback if I were to post parts of my project piecemeal (I tried with one file). What can I do to give my project at least some measure of success without getting embarrassed or devastated in the process?

    Read the article

  • Cloud Without Compromise – Oracle Fusion HCM

    - by Jay Richey, HCM Product Marketing
    We’ve all heard about the cloud, and many HR organizations have already launched cloud initiatives. But too many cloud HCM vendors can’t deliver on their promise to lower costs, reduce risk and improve efficiency. When only 5% of CEOs are satisfied with HR*, something needs to change. Only Oracle delivers the promise of the cloud in deployment models tailored to your needs – giving you cloud without compromise. Oracle Fusion HCM provides a unified system with all the analytics and reporting tools you need. Join us for an engaging and insightful webcast this Wednesday, November 16th, at 9am Pacific to learn more about how Oracle Fusion HCM can fulfill your promise. http://www.oracle.com/us/dm/sev100018463-wwmk11040178mpp002-521274.html

    Read the article

  • Monitor not detected after booting without monitor attached (12.04)

    - by cawkie
    I had a stable 12.04 machine running perfectly. The machine was booted without the monitor connected - since then the system always boots to low graphics mode.
    Onboard graphics (from lspci): VGA compatible controller: Intel Corporation 4 Series Chipset Integrated Graphics Controller (rev 03)
    Monitor: AOC e2450Swh
    The Displays widget shows the monitor as a laptop (!?) and system details show the graphics as Gallium 0.4 on llvmpipe (LLVM 0x300). The X server log appears to show the correct monitor detected. When I boot from a live CD I get full 3D graphics. I've tried the monitor on a different machine - all OK. I've tried a different monitor on this machine - same problem. Between having a working system and a broken one there have been no updates and I have made no configuration changes...
    EDIT: I have come to the conclusion that the problem is caused by a known issue with LightDM hanging on the battery check. I've managed to get 3D graphics working by switching to GDM - not a solution but an acceptable workaround. I would still like to know what is causing the problem and how I managed to get my system into this state!

    Read the article

  • .htaccess does not work without index.php on CodeIgniter

    - by Mattia
    I have read a lot of topics with the same problem but I have not found the solution. I have a LAMP stack on an Ubuntu server. My document root is /home/utente/ and inside this dir I have another dir (turni) with a CodeIgniter web app. The web app works fine with index.php in the URL, but I want to eliminate it. I have this configuration:
    config.php in CodeIgniter: $config['index_page'] = '';
    .htaccess:
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_URI} ^system.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_URI} ^application.*
        RewriteRule ^(.*)$ /index.php?/$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ index.php?/$1 [L]
    /etc/apache2/sites-available/default:
        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /home/utente
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /home/utente/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog ${APACHE_LOG_DIR}/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
    When I open a link of the web app without index.php in the URL, the server shows me this error: The requested URL /turni/auth/login was not found on this server. Why? If I put index.php in, like /turni/index.php/auth/login, all works fine.

    Read the article

  • There is No Scrum without Agile

    - by John K. Hines
    It's been interesting for me to dive a little deeper into Scrum after realizing how fragile its adoption can be.  I've been particularly impressed with James Shore's essay "Kaizen and Kaikaku" and the Net Objectives post "There are Better Alternatives to Scrum" by Alan Shalloway.  The bottom line: You can't execute Scrum well without being Agile. Personally, I'm the rare developer who has an interest in project management.  I think the methodology to deliver software is interesting, and that there are many roles whose job exists to make software development easier.  As a project lead I've seen Scrum deliver for disciplined, highly motivated teams with solid engineering practices.  It definitely made my job an order of magnitude easier.  As a developer I've experienced huge rewards from having a well-defined pipeline of tasks that were consistently delivered with high quality in short iterations.  In both of these cases Scrum was an addition to a fundamentally solid process and a huge benefit to the team. The question I'm now facing is how Scrum fits into organizations without solid engineering practices.  The trend that concerns me is one of Scrum being mandated as the single development process across teams where it may not apply.  And we have to realize that Scrum itself isn't even a development process.  This is what worries me the most - the assumption that Scrum on its own increases developer efficiency when it is essentially an exercise in project management. Jim's essay quotes Tobias Mayer writing, "Scrum is a framework for surfacing organizational dysfunction."  I'm unsure whether a Vice President of Software Development wants to hear that, reality notwithstanding.  Our Scrum adoption has surfaced a great deal of dysfunction, but I feel the original assumption was that we would experience increased efficiency.  It's starting to feel like a blended approach - Agile/XP techniques for developers, Scrum for project managers - may be a better fit.  Or at least, a better way of framing the conversation. The blended approach.

    Read the article

  • Trouble installing Pokerstars on a Live USB without Persistence through WINE

    - by Ricky Foster
    I need to install any form of Texas Hold 'Em on a Lubuntu Live USB that doesn't have persistence. I was able to download PokerStars.net by emulating the .exe (a Windows-type file) using WINE for Linux (Lubuntu). But, when I try to install, I have no room. The only place on the Live USB is in the root folder, which is set to read-only. Is there any way I can change the read-only properties of the Live USB while it's in use? So, to recap: I am running Lubuntu 13.04 and can't start in Persistent mode. When I start normally everything works fine. I proceeded to Chromium and successfully downloaded Wine and the Pokerstars.exe. I right-clicked the downloaded file, then clicked Wine, and the installer loaded fine. There are about 8 different disk icons and only the one containing system files is active. Is there any way I can use the terminal to install it to root? Thanks in advance for your answer/alternate method (without having to buy another USB to install it to).

    Read the article

  • Augmenting functionality of subclasses without code duplication in C++

    - by Rob W
    I have to add common functionality to some classes that share the same superclass, preferably without bloating the superclass. The simplified inheritance chain looks like this:
    Element -> HTMLElement -> HTMLAnchorElement
    Element -> SVGElement -> SVGAElement
    The default doSomething() method on Element is no-op by default, but there are some subclasses that need an actual implementation that requires some extra overridden methods and instance members. I cannot put a full implementation of doSomething() in Element because 1) it is only relevant for some of the subclasses, 2) its implementation has a performance impact and 3) it depends on a method that could be overridden by a class in the inheritance chain between the superclass and a subclass, e.g. SVGElement in my example. Especially because of the third point, I wanted to solve the problem using a template class, as follows (it is a kind of decorator for classes):
        struct Element {
            virtual void doSomething() {}
        };

        // T should be an instance of Element
        template<class T>
        struct AugmentedElement : public T {
            // doSomething is expensive and uses T
            virtual void doSomething() override {}
            // Used by doSomething
            virtual bool shouldDoSomething() = 0;
        };

        class SVGElement : public Element { /* ... */ };

        class SVGAElement : public AugmentedElement<SVGElement> {
            // some non-trivial check
            bool shouldDoSomething() { /* ... */ return true; }
        };

        // Similarly for HTMLAElement and others
    I looked around (in the existing (huge) codebase and on the internet), but didn't find any similar code snippets, let alone an evaluation of the effectiveness and pitfalls of this approach. Is my design the right way to go, or is there a better way to add common functionality to some subclasses of a given superclass?

    Read the article

  • Managing constant buffers without FX interface

    - by xcrypt
    I am aware that there is a sample on working without FX in the sample browser, and I already checked that one. However, some questions arise. In the sample:
        D3DXMATRIXA16 mWorldViewProj;
        D3DXMATRIXA16 mWorld;
        D3DXMATRIXA16 mView;
        D3DXMATRIXA16 mProj;
        mWorld = g_World;
        mView = g_View;
        mProj = g_Projection;
        mWorldViewProj = mWorld * mView * mProj;
        VS_CONSTANT_BUFFER* pConstData;
        g_pConstantBuffer10->Map( D3D10_MAP_WRITE_DISCARD, NULL, ( void** )&pConstData );
        pConstData->mWorldViewProj = mWorldViewProj;
        pConstData->fTime = fBoundedTime;
        g_pConstantBuffer10->Unmap();
    They are copying their D3DXMATRIXes to D3DXMATRIXA16. Checking on MSDN, these matrices are 16-byte aligned and optimised for the Intel Pentium 4. So, my first question: 1) Is it necessary to copy matrices to D3DXMATRIXA16 before sending them to the constant buffer? And if not, why don't we just use D3DXMATRIXA16 all the time? I have another question about managing multiple constant buffers within one shader. Suppose that, within your shader, you have multiple constant buffers that need to be updated at different times:
        cbuffer cbNeverChanges { matrix View; };
        cbuffer cbChangeOnResize { matrix Projection; };
        cbuffer cbChangesEveryFrame { matrix World; float4 vMeshColor; };
    Then how would I set these buffers all at different times? g_pd3dDevice->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer10 ); gives me the possibility to set multiple buffers, but that is within one call. 2) Is that okay even if my constant buffers are updated at different times? And I suppose I have to make sure the constant buffers are in the same position in the array as the order in which they appear in the shader?

    Read the article

  • Implementing `fling` logic without pan gesture recognizers

    - by KDiTraglia
    So I am trying to port over a simple game that I originally wrote for iPhone into cocos2d-x. I've hit a minor bump, however, in implementing the simple 'fling' logic I had in the iPhone version, which is difficult to port over to C++. In iOS I could get the velocity of a pan gesture very easily: CGPoint velocity = [recognizer velocityInView:recognizer.view]; However, now I basically only know where the touch began, where the touch ended, and all the touches that are logged in between. For now I logged all the points onto a stack, then pulled the last point and the 6th-to-last point (which seemed to work the best), found the difference between those points, multiplied it by a constant, and used that as the velocity. It works relatively well, but I'm wondering if anyone else has any better algorithms, when given a bunch of touch points, to figure out a new speed upon releasing an object that feels natural (note: speed in my game is just a constant x and y; there's no drag or spin or anything tricky like that). Bonus points if anyone has figured out how to get pan gestures into the newest version (3.0 alpha) of cocos2d-x without losing the ability to build cross-platform.
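    Not from the original question, but the heuristic described above (difference between the newest touch point and one a few samples back, scaled by a constant) can be sketched roughly as below. The question is about cocos2d-x/C++, so the Java here is purely illustrative, and the sample gap and scale factor are assumptions to tune.
        import java.util.ArrayList;
        import java.util.List;

        // Rough sketch of the fling heuristic: record touch points while the finger
        // moves, then derive a velocity from the newest point and one a few samples back.
        class FlingTracker {
            private static final int SAMPLE_GAP = 6;   // "6th to last point" from the question
            private static final float SCALE = 8.0f;   // tuning constant - an assumption, adjust to taste

            private final List<float[]> points = new ArrayList<>();

            void onTouchMoved(float x, float y) {
                points.add(new float[] { x, y });
            }

            // Called when the touch ends; returns {vx, vy} to apply to the flung object.
            float[] onTouchEnded() {
                if (points.size() < 2) {
                    return new float[] { 0f, 0f };
                }
                float[] last = points.get(points.size() - 1);
                float[] earlier = points.get(Math.max(0, points.size() - SAMPLE_GAP));
                float vx = (last[0] - earlier[0]) * SCALE;
                float vy = (last[1] - earlier[1]) * SCALE;
                points.clear();
                return new float[] { vx, vy };
            }
        }
    Dividing by the elapsed time between the two sampled points, instead of multiplying by a fixed constant, would turn this into a true velocity and make the fling feel consistent across frame rates.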

    Read the article

  • Bitmap Font Displays in Center Always Without Coding it Manually (Fix Coordinate Problem on Text)

    - by David Dimalanta
    Is there a way to keep the text centered without manually coding it, especially when making an update? I'm making a display for the highest score. Let's say that the score is 9. However, if the score is 9,999,999, the text still displays at the same fixed X and Y coordinates. Is there really a way to keep the text centered, especially when it changes after a player beats the world record? Here's my code inside the SpriteBatch:
        font.setScale(1.5f);
        font.draw(batch, "HIGHEST SCORE:", (900/10)*1 + 60, (1280/16)*10);
        font.draw(batch, "" + 9999999 + "", (900/10)*4, (1280/16)*8);
        batch.draw(grid_guide, 0, 0, 900, 1280); // --> For testing purpose only.
        // Where 9999999 is a new record score, for example.
    Here's the image shown as an example. I added a red grid so that I could check whether the score, when updated, will always display in the center no matter how many digits it has. However, the position is fixed, so I have to figure out how to display it in the center automatically, regardless of the number of digits, when updating the high score. I have used the LibGDX preferences very well, though, to save and load records for the high score.
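    Not from the original question, but the usual LibGDX approach is to measure the rendered string first and derive X from the measured width each time the score changes. A minimal sketch (GlyphLayout is the current API; older LibGDX versions exposed font.getBounds(text) instead), assuming the 900-wide virtual screen used in the code above:
        import com.badlogic.gdx.graphics.g2d.BitmapFont;
        import com.badlogic.gdx.graphics.g2d.GlyphLayout;
        import com.badlogic.gdx.graphics.g2d.SpriteBatch;

        // Centre a score string horizontally on a 900-wide virtual screen.
        void drawCenteredScore(SpriteBatch batch, BitmapFont font, long highScore, float y) {
            GlyphLayout layout = new GlyphLayout(font, String.valueOf(highScore));
            float x = (900f - layout.width) / 2f;   // measured width changes with the digit count
            font.draw(batch, layout, x, y);
        }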

    Read the article

  • Set UFW before.rules without restart of server

    - by enedene
    I use UFW on my Ubuntu server. Unfortunately there are no rules in UFW itself for port forwarding to another machine. What you need to do is edit /etc/ufw/before.rules and put the routing commands there, for example:
        # nat Table rules
        *nat
        :POSTROUTING ACCEPT [0:0]
        # Forward traffic from eth0 through eth1.
        -A POSTROUTING -s 192.168.0.0/24 -o eth1 -j MASQUERADE
        -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.0.200:80
        -A PREROUTING -i eth1 -p udp --dport 10090 -j DNAT --to 192.168.0.202:22
        -A PREROUTING -i eth1 -p tcp --dport 10090 -j DNAT --to 192.168.0.202:22
        -A PREROUTING -i eth1 -p tcp --dport 443 -j DNAT --to 192.168.0.200:443
        -A PREROUTING -i eth1 -p udp --dport 443 -j DNAT --to 192.168.0.200:443
        -A PREROUTING -i eth1 -p tcp --dport 57626 -j DNAT --to 192.168.0.2:57626
        -A PREROUTING -i eth1 -p udp --dport 57626 -j DNAT --to 192.168.0.2:57626
        -A PREROUTING -i eth1 -p tcp --dport 3306 -j DNAT --to 192.168.0.200:3306
        -A PREROUTING -i eth1 -p udp --dport 3306 -j DNAT --to 192.168.0.200:3306
        COMMIT
    My problem is that I can't find a way to apply new forwarding rules without restarting the server, which I hate to do very much. So please help me: is there a way?

    Read the article

  • Re-installing Ubuntu without losing files, how to?

    - by moraleida
    Some time back I bought a second PC to serve as my backup machine, but I've never managed to set it up as I would like. Now I want to start over, but I've messed so much with its disks that I'm kinda afraid to lose something on the way, thus this question. Right now, I have a 1TB disk partitioned like this (as per GParted):
    /dev/sda1 (ext4) 346.12GB - is almost full and has an old install of Ubuntu 11.10. It no longer boots, ever since I installed Windows 7 on sda3. Everything that matters to me is tucked into /var/www/; all the rest can just go.
    /dev/sda2 (ext4) 196.45GB - has an old install of 12.04 and nothing important; it's pretty much empty and also doesn't boot.
    /dev/sda3 (ntfs) 377.97GB - is my boot partition with Windows 7 and some important files, and I'd like to keep it untouched.
    /dev/sda4 (extended) 10.97GB - was created when I first installed Ubuntu, I think.
    In my ideal world, I'd like to safely reinstall Ubuntu from the 12.04 live USB and merge sda1 and sda2 without losing any files. Is that possible? How?

    Read the article

  • Can I install Ubuntu 13.10 without the internet?

    - by user1526570
    I'm new to Ubuntu and Linux in general. I'm currently out of town and the dorm I am living in has a terrible internet connection. It will be another 2-3 weeks before I can go home and have a proper internet connection. So my question is whether or not I can install Ubuntu 13.10 on my laptop without the internet and then do the updates once I go home. Also, I'm attempting to do a dual boot with my Lenovo G505s, which came pre-installed with Windows 8. Hopefully I can pull this off. I already did the necessary things (I think and hope so) prior to installation:
    Disable secure boot
    Enable legacy and boot UEFI first
    Create partition
    Put installer on my pen drive
    As I am quite new to this, any advice would be of great help. Thanks in advance! EDIT: I tried yesterday. The installation asked me to connect to the internet, so I used my crappy dorm internet. When it reached the downloading/installation of Ubuntu One, it just stalled and went on forever, so I had to stop it.

    Read the article

  • PERT shows relationships between defined tasks in a project without taking into consideration a time line

    The program evaluation and review technique (PERT) shows relationships between defined tasks in a project without taking a timeline into consideration. This chart is an excellent way to identify dependencies of tasks based on other tasks. It allows project managers to identify the critical path of a project in order to minimize any time delays to the project. Craig Borysowich, in his article "Pros & Cons of the PERT/CPM Method", stated the following advantages and disadvantages:
    "PERT/CPM has the following advantages: A PERT/CPM chart explicitly defines and makes visible dependencies (precedence relationships) between the WBS elements, PERT/CPM facilitates identification of the critical path and makes this visible, PERT/CPM facilitates identification of early start, late start, and slack for each activity, PERT/CPM provides for potentially reduced project duration due to better understanding of dependencies leading to improved overlapping of activities and tasks where feasible.
    PERT/CPM has the following disadvantages: There can be potentially hundreds or thousands of activities and individual dependency relationships, The network charts tend to be large and unwieldy requiring several pages to print and requiring special size paper, The lack of a timeframe on most PERT/CPM charts makes it harder to show status although colors can help (e.g., specific color for completed nodes), When the PERT/CPM charts become unwieldy, they are no longer used to manage the project." (Borysowich, 2008)
    Traditionally, PERT charts are used in the initial planning of a project, such as a project using the waterfall approach. Once the chart is created, project managers can further analyze this data to determine the earliest start time for each stage in the project. This is important because this information can be used to help forecast resource needs during a project, and where in the project they will be needed. However, an agile environment can approach this differently because of the constant need to be in contact with the client and the other stakeholders. The PERT chart can also be used during a project iteration to determine what is to be worked on next, much like a prioritized to-do list a wife would give her husband at the start of a weekend. In my personal opinion, a COTS-centric environment would not really change how a company uses a PERT chart in its day-to-day work. The only thing I can say is that there would be fewer tasks to include in the chart, because the functional milestones are already completed when the components are purchased.
    References:
    http://www.netmba.com/operations/project/pert/
    http://web2.concordia.ca/Quality/tools/20pertchart.pdf
    http://it.toolbox.com/blogs/enterprise-solutions/pros-cons-of-the-pertcpm-method-22221
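    Not part of the original text, but the early start, late start, and slack figures mentioned above come from a simple forward and backward pass over the task graph; activities with zero slack form the critical path. A minimal sketch (the task names and durations are invented):
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        // Forward/backward pass over a tiny task graph to compute early start,
        // late start and slack. Zero-slack tasks form the critical path.
        public class CpmSketch {
            record Task(String name, int duration, List<String> predecessors) {}

            public static void main(String[] args) {
                // Tasks are listed in topological order (predecessors first).
                Map<String, Task> tasks = new LinkedHashMap<>();
                tasks.put("design", new Task("design", 3, List.of()));
                tasks.put("build",  new Task("build",  5, List.of("design")));
                tasks.put("docs",   new Task("docs",   2, List.of("design")));
                tasks.put("test",   new Task("test",   2, List.of("build", "docs")));

                // Forward pass: early start = max(early finish of all predecessors).
                Map<String, Integer> earlyStart = new LinkedHashMap<>();
                for (Task t : tasks.values()) {
                    int es = t.predecessors().stream()
                            .mapToInt(p -> earlyStart.get(p) + tasks.get(p).duration())
                            .max().orElse(0);
                    earlyStart.put(t.name(), es);
                }
                int projectEnd = tasks.values().stream()
                        .mapToInt(t -> earlyStart.get(t.name()) + t.duration()).max().orElse(0);

                // Backward pass: late finish = min(late start of all successors), or project end.
                Map<String, Integer> lateFinish = new LinkedHashMap<>();
                List<Task> reversed = new ArrayList<>(tasks.values());
                Collections.reverse(reversed);
                for (Task t : reversed) {
                    int lf = tasks.values().stream()
                            .filter(s -> s.predecessors().contains(t.name()))
                            .mapToInt(s -> lateFinish.get(s.name()) - s.duration())
                            .min().orElse(projectEnd);
                    lateFinish.put(t.name(), lf);
                }

                for (Task t : tasks.values()) {
                    int ls = lateFinish.get(t.name()) - t.duration();
                    int slack = ls - earlyStart.get(t.name());
                    System.out.printf("%-6s ES=%d LS=%d slack=%d%s%n",
                            t.name(), earlyStart.get(t.name()), ls, slack,
                            slack == 0 ? "  <- critical path" : "");
                }
            }
        }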

    Read the article

  • How to Code Faster (Without Sacrificing Quality)

    - by ashes999
    I've been a professional coder for several years. The comments about my code have generally been the same: writes great code, well-tested, but could be faster. So how do I become a faster coder, without sacrificing quality? For the sake of this question, I'm going to limit the scope to C#, since that's primarily what I code (for fun) -- or Java, which is similar enough in many ways that matter. Things that I'm already doing:
    Write the minimal solution that will get the job done
    Write a slew of automated tests (prevents regressions)
    Write (and use) reusable libraries for all kinds of things
    Use well-known technologies where they work well (eg. Hibernate)
    Use design patterns where they fit into place (eg. Singleton)
    These are all great, but I don't feel like my speed is increasing over time. I do care, because if I can do something to increase my productivity (even by 10%), that's 10% faster than my competitors. (Not that I have any.) Besides which, I've consistently gotten this feedback from my managers -- whether it was small-scale Flash development or enterprise Java/C++ development. Edit: There seem to be a lot of questions about what I mean by fast, and how I know I'm slow. Let me clarify with some more details. I worked in small and medium-sized teams (5-50 people) in various companies on various projects with various technologies (Flash, ASP.NET, Java, C++). The observation of my managers (which they told me directly) is that I'm "slow." Part of this is because a significant number of my peers sacrificed quality for speed; they wrote code that was buggy, hard to read, hard to maintain, and difficult to write automated tests for. My code generally is well-documented, readable, and testable. At Oracle, I would consistently solve bugs slower than other team members. I know this because I would get comments to that effect; this means that other (yes, more senior and experienced) developers could do my work in less time than it took me, at nearly the same quality (readability, maintainability, and testability). Why? What am I missing? How can I get better at this? My end goal is simple: if I can make product X in 40 hours today, and I can improve myself somehow so that I can create the same product in 20, 30, or even 38 hours tomorrow, that's what I want to know -- how do I get there? What process can I use to continually improve? I had thought it was about reusing code, but that's not enough, it seems.

    Read the article

  • How to install Awesome WM without root access?

    - by ssice
    I want to install the Awesome window manager. In the environment where I want to configure it I don't have root access. I do have a machine where I can be root (for this I use a virtual machine on my laptop). I have tried the following:
        $ sudo apt-get install awesome
        The following packages are about to be installed: awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0 libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0 libxdg-basedir1 menu rlwrap
        Do you want to continue [Y/n]? n
    I now have the list of dependencies for awesome, so I downloaded them all. For that, I did the following.
        $ pkgs="awesome libev3 libid3tag0 libimlib2 liblua5.1-0 libxcb-icccm1 libxcb-image0 libxcb-keysyms1 libxcb-property1 libxcb-randr0 libxcb-xinerama0 libxcb-xtest0 libxdg-basedir1 menu rlwrap" # this is just for not writing it all ;)
        $ sudo apt-get install --download-only $pkgs
        ....
        $ mkdir -p /tmp/x_debs
        $ for pkg in $pkgs; do cp /var/cache/apt/archives/$pkg* /tmp/x_debs/; done
        [ copies all *.deb from my dependencies to /tmp/x_debs ]
    Now, I want to install the dependencies. For that, I set up a fake dpkg install in my home folder:
        $ mkdir $HOME/root
        $ mkdir -p $HOME/root/var/lib/dpkg/{triggers,updates}
        $ touch $HOME/root/var/lib/dpkg/{available,status}
    Now I tried to install with dpkg, but I could not:
        $ dpkg --force-not-root --root=$HOME/root --recursive -i /tmp/x_debs
    It failed while trying to set permissions for the packages and running chroot. As I do have root access on this machine, I ran it with privileges:
        $ sudo dpkg --root=$HOME/root --recursive -i /tmp/x_debs
    Then I had a lot of stuff (i.e., everything: the dependencies and the WM itself) installed inside $HOME/root. In particular, the xcb-* libraries were installed in $HOME/root/usr/lib and the awesome binary in $HOME/root/usr/bin/awesome. If I try to execute awesome as is, I get an error that libraries could not be loaded. That's normal, as they are not in /usr/lib nor in /lib. So I ran export LD_LIBRARY_PATH=$HOME/root/usr/lib:$HOME/root/lib:${LD_LIBRARY_PATH} and awesome would try to load. However, I could not make gdm run awesome within GNOME or replace it. I did it this way so I can copy everything in my $HOME/root folder, paste it on the other machine and have it running. Is there any other way (with less wasted space maybe..) to do this? How can I tell gdm to exec awesome without root access?

    Read the article

  • Dual-boot computer won't boot without external hard drive

    - by FrankP
    I have Ubuntu loaded on my external HDD. I tried to unplug the external drive so that I could run Windows as the default OS to boot when the computer turns on, but it gives me an error. I need to know how I can make it so that when my computer boots it stops saying Error: no such device: (a whole bunch of numbers and letters) and then grub rescue>_. If I plug the external HDD in and let Ubuntu run the boot process, then it gives me a list of OSes/HDDs to choose from, and Windows 7 is there. The only problem is that I want Windows to be my default OS, not the other way around. P.S. I have found that I dislike Ubuntu because I can't even figure out how to install the necessary programs to learn how to start writing Ruby on Rails. So installing it was a waste of my time, in my opinion. Now that I have it on the external hard drive, I will leave it installed though. I just don't want to have to keep that external drive plugged in to my computer all the time. Thank you a ton to whoever can help me! Thank you for the detailed instructions. I am doing my best to follow you, and it makes sense when I read it, but Rescatux is not doing what you said it would. None of the options you said would appear are there. On my screen there are 4 options when MBR runs; none look familiar, and when I picked the best possible option based on my educated guesses it said success. I tried to restart my computer and it said Please insert Windows recovery disc and hit enter. The problem is that I don't have the Windows recovery disc. I bought my computer from a local computer tech and he loads Windows on it for you. I have no time to run my computer over to him, as Sunday is my only free day. I think that I just wrecked my computer in the process of this attempted fix; Windows refuses to boot now WITH or WITHOUT the HDD. Please help, this is getting out of hand.

    Read the article

  • C# 5: At last, async without the pain

    - by Alex.Davies
    For me, the best feature in Visual Studio 11 is the async and await keywords that come with C# 5. I am a big fan of asynchronous programming: it frees up resources, in particular the thread that a piece of code needs to run in. That lets that thread run something else, while waiting for your long-running operation to complete. That's really important if that thread is the UI thread, or if it's holding a lock because it accesses some data structure. Before C# 5, I think I was about the only person in the world who really cared about asynchronous programming. The trouble was that you had to go to extreme lengths to make code asynchronous. I would forever be writing methods that, instead of returning a value, accepted an extra argument that is a "continuation". Then, when calling the method, I'd have to pass a lambda in to it, which contained all the stuff that needed to happen after the method finished. Here is a real snippet of code that is in .NET Demon:
        m_BuildControl.FilterEnabledForBuilding(
            projects,
            enabledProjects => m_OutOfDateProjectFinder.FilterNeedsBuilding(
                enabledProjects,
                newDirtyProjects =>
                {
                    // Mark any currently broken projects as dirty
                    newDirtyProjects.UnionWith(m_BrokenProjects);
                    // Copy what we found into the set of dirty things
                    m_DirtyProjects = newDirtyProjects;
                    RunSomeBuilds();
                }));
    It's just obtuse. Who puts a lambda inside a lambda like that? Well, me obviously. But surely enabledProjects should just be the return value of FilterEnabledForBuilding? And newDirtyProjects should just be the return value of FilterNeedsBuilding? C# 5 async/await lets you write asynchronous code without it looking so stupid. Here's what I plan to change that code to, once we upgrade to VS 11:
        var enabledProjects = await m_BuildControl.FilterEnabledForBuilding(projects);
        var newDirtyProjects = await m_OutOfDateProjectFinder.FilterNeedsBuilding(enabledProjects);
        // Mark any currently broken projects as dirty
        newDirtyProjects.UnionWith(m_BrokenProjects);
        // Copy what we found into the set of dirty things
        m_DirtyProjects = newDirtyProjects;
        RunSomeBuilds();
    Much easier to read! But how is this the same code? If we were on the UI thread, doesn't the UI thread have to block while FilterEnabledForBuilding runs? No, it doesn't, and that's the magic of the await keyword! It cuts your method up into its constituent pieces, much like I did manually with lambdas before. When you run it, only the piece up to the first await actually runs. The rest is passed to FilterEnabledForBuilding as a continuation, which will get called back whenever that method is finished. In the meantime, our thread returns, and can go back to making the UI responsive, or whatever else threads do in their spare time. This is actually a massive simplification, and if you're interested in all the gory details, and speed hacks that the await keyword actually does for you, I recommend Jon Skeet's blog posts about it.

    Read the article

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background Story: One of my friends recently called up and asked me if I had spare time to look at his database and give him some performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database is in a very early stage and is small as of the moment, he would like me to have the complete database. My response to him was "Sure! In that case, take a backup of the database and send it to me. I will restore it onto my computer and play with it." He did send me his database; however, his method made me write this quick note here. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I asked for a complete backup (I wanted to review file groups, files, as well as a few other details). Upon calling my friend, I found that he was not available. Now, he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I can get in touch with him again.
    Technical Talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulling plugs to machines, machine crashes or any other reasons), it is possible (there's no guarantee) to attach the .mdf file only to the server. Please note that there can be many more reasons for a database not getting attached or restored. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this. I am listing all of them here. Before using any of them, please consult the Domain Expert in your company or industry. Also, never attempt this on a live/production server without the presence of a Disaster Recovery expert.
        USE [master]
        GO
        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
        @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO
        -- Method 2:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH_REBUILD_LOG
        GO
    Method 2: If one or more log files are missing, they are recreated again. There is one more method which I am demonstrating here, but I have not used it myself before. According to Books Online, it will work only if there is one log file that is missing. If there is more than one log file involved, all of them are required to undergo the same procedure.
        -- Method 3:
        CREATE DATABASE TestDb ON
        ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH
        GO
    Please read Books Online in depth and consult DR experts before working on the production server. In my case, the above syntax just worked fine as the database was clean when it was detached. Feel free to write your opinions and experiences, for it will help the IT community to learn more from your suggestions and skills. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • Setting up SharePoint without Active Directory

    - by eJugnoo
    In order to set up SharePoint without AD, you need to run the following PowerShell command in the Management Shell after installing SharePoint on your server, but before running the Config Wizard (we don’t want to run this SP farm in stand-alone mode!):
    1. New-SPConfigurationDatabase
        SYNOPSIS
            Creates a new configuration database.
        SYNTAX
            New-SPConfigurationDatabase [-DatabaseName] <String> [-DatabaseServer] <String> [[-DirectoryDomain] <String>] [[-DirectoryOrganizationUnit] <String>] [[-AdministrationContentDatabaseName] <String>] [[-DatabaseCredentials] <PSCredential>] [-FarmCredentials] <PSCredential> [-Passphrase] <SecureString> [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
        DESCRIPTION
            The New-SPConfigurationDatabase cmdlet creates a new configuration database on the specified database server. This is the central database for a new SharePoint farm.
            For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation (http://go.microsoft.com/fwlink/?LinkId=163185).
        RELATED LINKS
            Backup-SPConfigurationDatabase
            Disconnect-SPConfigurationDatabase
            Connect-SPConfigurationDatabase
            Remove-SPConfigurationDatabase
        REMARKS
            To see the examples, type: "get-help New-SPConfigurationDatabase -examples".
            For more information, type: "get-help New-SPConfigurationDatabase -detailed".
            For technical information, type: "get-help New-SPConfigurationDatabase -full".
    NOTE: Use the -AdministrationContentDatabaseName switch to pass the name of the Admin database you want instead of the GUID-based name it automatically creates. Hence, one can pretty easily control the Admin, Config, and Content database names at the time of farm creation. If creating a new farm, you can also delete and re-provision any automatically created service databases from the UI, to decide what database names you want.
    2. Run the SharePoint Configuration Wizard, and you’ll find the server already added to the farm. Select do not disconnect from farm, and proceed… Select the port, and authentication (NTLM in my case). Click next, and the wizard will complete the remaining steps of provisioning, including creation of the Central Admin Web App on the desired port. Once successful, it will open the Central Admin site and ask you to run the Farm Config Wizard. I chose to skip it and do things manually, to remain in control of what is happening on the farm, like creating the web app for site collections, creating the very first site collection, and any other service applications. I needed this to create a public-facing installation of SharePoint Foundation RTM on a server which didn’t have AD. Now I am going to set up FBA, and possibly Live ID Auth as well. I will also be setting up RBS and multi-tenancy on this farm, and would post any notes and findings here… --Sharad

    Read the article

  • How to Get Windows 7 Theme Wallpapers Without Installing Them

    - by Mysticgeek
    Are you using an older version of Windows but like the Windows 7 theme wallpapers? What if you have Windows 7 but you don’t want to install the themes just to get the wallpapers? Here is how to get them without having to install themes. This guest article was written by Ryan Dozier from the Doztech tech blog.
    Getting the Wallpaper on XP, Vista, or Windows 7
    First download and install 7-zip on your machine (link below). After you’ve installed 7-zip, download a Windows 7 theme (link below), right-click on the theme, select 7-Zip, and Extract to “Theme Name”… A new folder will appear with the theme name on it. When you open it, there will be a folder called DesktopBackground or something similar. Open the folder to view the wallpapers for the theme. You can delete the extra files and just keep the wallpapers!
    Getting the Wallpaper on Ubuntu
    Extracting the wallpaper on Ubuntu can be a little tricky. Just follow these steps and you will be able to do it. First go to the Ubuntu Software Center under the Applications menu. Search for 7zip and click on the arrow to go to the applications menu. Find the Install button and click it. It will take a couple of minutes for 7zip to install. After 7zip installs, close the Ubuntu Software Center and download a Windows 7 theme. Store it somewhere you can access it quickly. Right-click on the theme, select Rename, get rid of the themepack extension and replace it with zip. The file should be “Theme Name.zip” after you rename it. Right-click on the theme and click Extract Here. After the extracting you will have a new folder with the theme name. Open it and go into the DesktopBackground folder to get the wallpapers. You can delete the extra files and just keep the wallpapers. If you want to get the new Windows 7 Themes Wallpapers, but don’t want to search and install them separately, this is a nice workaround.
    Links: Get 7-zip for Windows here. Get Windows 7 Themes here.

    Read the article

< Previous Page | 3 4 5 6 7 8 9 10 11 12 13 14  | Next Page >