Search Results

Search found 97790 results on 3912 pages for 'one stuck pixel'.

Page 52/3912

  • Extracting a quadrilateral image to a rectangle

    - by Will
    In the image below, the sign on the side of the van is not face-on to the camera. I want to calculate, as best I can with the pixels I have, what it would look like face-on. I imagine this is some kind of loop through the x and y axes, doing a Bresenham's line on both dimensions at once, with some kind of mixing when pixels in the source image overlap - some sub-pixel mixing of some sort? What approaches are there, and how do you mix the pixels? Is there a standard approach for this?
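
    What is described here is usually handled with a projective (perspective) transform rather than per-axis line walks: compute the 3x3 homography that maps the sign's four corners onto a rectangle, then resample the source image through it; the interpolation mode of the resampler is the "sub-pixel mixing". A minimal sketch in Python with OpenCV, where the corner coordinates, output size and filenames are placeholders to replace:

        import cv2
        import numpy as np

        img = cv2.imread("van.jpg")

        # Four corners of the sign in the photo (TL, TR, BR, BL) - hand-picked placeholders.
        src = np.float32([[412, 240], [980, 210], [1005, 560], [430, 610]])

        # Target rectangle; choose a size close to the sign's real aspect ratio.
        w, h = 600, 350
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

        # 3x3 homography mapping the quadrilateral onto the rectangle.
        M = cv2.getPerspectiveTransform(src, dst)

        # warpPerspective does the resampling; the flags argument is the sub-pixel
        # mixing: bilinear here, cv2.INTER_CUBIC for a smoother result.
        flat = cv2.warpPerspective(img, M, (w, h), flags=cv2.INTER_LINEAR)
        cv2.imwrite("sign_face_on.png", flat)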

    Read the article

  • How to save an Esri map as an image file

    - by Jin
    Hi, I am using Silverlight 3 and I am trying to take a screenshot of an Esri map. I was able to take a screenshot and save it as a file for Silverlight controls, but when I try to access the Esri map image, I get a "Pixel access not allowed" error. I heard this is because of a cross-domain restriction (I am trying to get the map image on the client side, and the map image is not accessible at the server side in my Silverlight application). So I am trying to find a function from Esri that lets me save the map image as a file. Does anybody know how to do this, or is there any other way around it?

    Read the article

  • Test an SSH Connection from the Windows command line

    - by IguanaMinstrel
    I am looking for a way to test whether an SSH server is available from a Windows host. I found this one-liner, but it requires a Unix/Linux host: ssh -q -o "BatchMode=yes" user@host "echo 2>&1" && echo "UP" || echo "DOWN" Telnet'ing to port 22 works, but that's not really scriptable. I have also played around with Plink, but I haven't found a way to get the functionality of the one-liner above. Does anyone know Plink well enough to make this work? Are there any other Windows-based tools that would work? Please note that the SSH servers in question are behind a corporate firewall and are NOT internet accessible. Arrrg. Figured it out: C:\>plink -batch -v user@host Looking up host "host" Connecting to 10.10.10.10 port 22 We claim version: SSH-2.0-PuTTY_Release_0.62 Server version: SSH-2.0-OpenSSH_4.7p1-hpn12v17_q1.217 Using SSH protocol version 2 Server supports delayed compression; will try this later Doing Diffie-Hellman group exchange Doing Diffie-Hellman key exchange with hash SHA-256 Host key fingerprint is: ssh-rsa 1024 aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa:aa Initialised AES-256 SDCTR client->server encryption Initialised HMAC-SHA1 client->server MAC algorithm Initialised AES-256 SDCTR server->client encryption Initialised HMAC-SHA1 server->client MAC algorithm Using username "user". Using SSPI from SECUR32.DLL Attempting GSSAPI authentication GSSAPI authentication initialised GSSAPI authentication initialised GSSAPI authentication loop finished OK Attempting keyboard-interactive authentication Disconnected: Unable to authenticate C:\>
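
    If all that is needed is an up/down answer, a plain TCP banner check is also enough and is easy to script on Windows. A small sketch in Python (assuming a Python interpreter is available on the Windows host); it only confirms that something speaking SSH answers on port 22 and says nothing about authentication:

        import socket
        import sys

        host, port, timeout = "host", 22, 5  # placeholders

        try:
            # Open a TCP connection and read the server's identification string.
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.settimeout(timeout)
                banner = s.recv(64)
            if banner.startswith(b"SSH-"):
                print("UP:", banner.decode(errors="replace").strip())
                sys.exit(0)
            print("DOWN: port open but no SSH banner")
            sys.exit(1)
        except OSError as exc:
            print("DOWN:", exc)
            sys.exit(1)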

    Read the article

  • What's the best cloud backup solution for a small-scale server environment?

    - by nbv4
    I have a server that runs a Postgres database that contains about 200MB of data. Currently I have a cron job set up on my home computer which: SSHes into my server; runs a remote script which makes a backup of the database; SCPs that dump over to my local hard drive for storage. Each dump is 20MiB, and it does this every six hours (one month of backups is roughly 2GiB). The problem with this setup is that if my local machine goes down for whatever reason, no backups will be made. Also, I can't have the cron job run from the server, because I can't have it SCP to my local machine from my server (firewalls and all that crap). My local machine is running Ubuntu 10.04, and my server is Ubuntu 9.10 server edition. I looked into Ubuntu One, but currently it's GUI-only. I also looked into Dropbox, but it's a pain in the ass to get set up on Linux without GUI support. Amazon S3 looks good, but it's not free (though dirt cheap). Is there any other alternative that I should look into? I'd prefer something where I can just have my script dump the database into a directory, and have the backup service 'watch' that folder and sync accordingly. I could maybe also have my local machine sync to the cloud backup so I have even more redundancy, plus easy access to my backups for use in testing. Edit: My server is a VPS, so whatever solution I end up using has to be 100% software-based.
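
    If S3's (very small) cost is acceptable, the upload can run on the server itself right after the dump, which takes the home machine out of the loop entirely. A rough sketch of that cron step in Python using boto3, where the bucket name, dump directory and key layout are placeholders and AWS credentials are assumed to be configured on the server:

        import datetime
        import pathlib
        import boto3

        bucket = "my-db-backups"                           # placeholder bucket name
        dump_dir = pathlib.Path("/var/backups/postgres")   # placeholder dump directory

        # Pick the most recent dump file produced by the backup script.
        latest = max(dump_dir.glob("*.dump"), key=lambda p: p.stat().st_mtime)
        key = "postgres/{}/{}".format(datetime.date.today().isoformat(), latest.name)

        # Push it to S3; old dumps can be expired with a bucket lifecycle rule.
        s3 = boto3.client("s3")
        s3.upload_file(str(latest), bucket, key)
        print("uploaded {} to s3://{}/{}".format(latest, bucket, key))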

    Read the article

  • Separate domains vs. one domain with alias-domains

    - by Quasdunk
    I tried to ask this question a few days ago but I'm afraid it was not clear enough, so here's another try. I have set up a LAMP server using ISPConfig 3 for the administration. PHP is running over FastCGI. I have several domains, like my_site.com, my_site.net and my_site.org, but they all point to the same application/website. Each domain has its own web root folder and is running under its own user. The application itself is in a common directory which is owned by another user, like so: # path to my_application (owned by web1) /var/www/clients/client1/web1/web/my_application/ # sym-link to my_application from my_site.com-web-root (owned by web5) /var/www/my_site.com/web -> /var/www/clients/client1/web1/web/ # sym-link to my_application from my_site.net (owned by web4) /var/www/my_site.net/web -> /var/www/clients/client1/web1/web/ With a setup like this I have encountered a few problems concerning permissions when performing filesystem operations with PHP. For instance, if the application is called via my_site.com, the user web5 tries to write something to the application folder. But the application folder is owned by the user web1, so web5 is not allowed to write there. As far as I understand, this is how FastCGI works. After some research and asking a few people, the solution seems to be to break it all down to one domain (e.g. my_site.com) and define the other domains (my_site.org, my_site.net) as aliases for this one domain. That way, there would be only one user who has all the necessary permissions. However, this would mean that we'd have to buy a multi-domain SSL certificate - but we already have an SSL certificate for each domain. We were able to use them with our previous provider (managed hosting), where we also had only one web directory and multiple domains. So, since this was possible there, I wonder: is putting all the domains together into one vhost with one main domain and several alias domains the right approach in this case? Or have I misunderstood something?

    Read the article

  • Android NDK Gaussian Blur radius stuck at 60

    - by rennoDeniro
    I implemented this NDK imeplementation of a Gaussian Blur, But I am having problems. I cannot increase the radius above 60, otherwise the activity just closes returning to a previous activity. No error message, nothing? Does anyone know why this could be? Note: This blur is based on the quasimondo implementation, here #include <jni.h> #include <string.h> #include <math.h> #include <stdio.h> #include <android/log.h> #include <android/bitmap.h> #define LOG_TAG "libbitmaputils" #define LOGI(...) __android_log_print(ANDROID_LOG_INFO,LOG_TAG,__VA_ARGS__) #define LOGE(...) __android_log_print(ANDROID_LOG_ERROR,LOG_TAG,__VA_ARGS__) typedef struct { uint8_t red; uint8_t green; uint8_t blue; uint8_t alpha; } rgba; JNIEXPORT void JNICALL Java_com_insert_your_package_ClassName_functionToBlur(JNIEnv* env, jobject obj, jobject bitmapIn, jobject bitmapOut, jint radius) { LOGI("Blurring bitmap..."); // Properties AndroidBitmapInfo infoIn; void* pixelsIn; AndroidBitmapInfo infoOut; void* pixelsOut; int ret; // Get image info if ((ret = AndroidBitmap_getInfo(env, bitmapIn, &infoIn)) < 0 || (ret = AndroidBitmap_getInfo(env, bitmapOut, &infoOut)) < 0) { LOGE("AndroidBitmap_getInfo() failed ! error=%d", ret); return; } // Check image if (infoIn.format != ANDROID_BITMAP_FORMAT_RGBA_8888 || infoOut.format != ANDROID_BITMAP_FORMAT_RGBA_8888) { LOGE("Bitmap format is not RGBA_8888!"); LOGE("==> %d %d", infoIn.format, infoOut.format); return; } // Lock all images if ((ret = AndroidBitmap_lockPixels(env, bitmapIn, &pixelsIn)) < 0 || (ret = AndroidBitmap_lockPixels(env, bitmapOut, &pixelsOut)) < 0) { LOGE("AndroidBitmap_lockPixels() failed ! error=%d", ret); } int h = infoIn.height; int w = infoIn.width; LOGI("Image size is: %i %i", w, h); rgba* input = (rgba*) pixelsIn; rgba* output = (rgba*) pixelsOut; int wm = w - 1; int hm = h - 1; int wh = w * h; int whMax = max(w, h); int div = radius + radius + 1; int r[wh]; int g[wh]; int b[wh]; int rsum, gsum, bsum, x, y, i, yp, yi, yw; rgba p; int vmin[whMax]; int divsum = (div + 1) >> 1; divsum *= divsum; int dv[256 * divsum]; for (i = 0; i < 256 * divsum; i++) { dv[i] = (i / divsum); } yw = yi = 0; int stack[div][3]; int stackpointer; int stackstart; int rbs; int ir; int ip; int r1 = radius + 1; int routsum, goutsum, boutsum; int rinsum, ginsum, binsum; for (y = 0; y < h; y++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; for (i = -radius; i <= radius; i++) { p = input[yi + min(wm, max(i, 0))]; ir = i + radius; // same as sir stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rbs = r1 - abs(i); rsum += stack[ir][0] * rbs; gsum += stack[ir][1] * rbs; bsum += stack[ir][2] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } } stackpointer = radius; for (x = 0; x < w; x++) { r[yi] = dv[rsum]; g[yi] = dv[gsum]; b[yi] = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (y == 0) { vmin[x] = min(x + radius + 1, wm); } p = input[yw + vmin[x]]; stack[ir][0] = p.red; stack[ir][1] = p.green; stack[ir][2] = p.blue; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = (stackpointer) % div; // same as sir routsum += 
stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi++; } yw += w; } for (x = 0; x < w; x++) { rinsum = ginsum = binsum = routsum = goutsum = boutsum = rsum = gsum = bsum = 0; yp = -radius * w; for (i = -radius; i <= radius; i++) { yi = max(0, yp) + x; ir = i + radius; // same as sir stack[ir][0] = r[yi]; stack[ir][1] = g[yi]; stack[ir][2] = b[yi]; rbs = r1 - abs(i); rsum += r[yi] * rbs; gsum += g[yi] * rbs; bsum += b[yi] * rbs; if (i > 0) { rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; } else { routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; } if (i < hm) { yp += w; } } yi = x; stackpointer = radius; for (y = 0; y < h; y++) { output[yi].red = dv[rsum]; output[yi].green = dv[gsum]; output[yi].blue = dv[bsum]; rsum -= routsum; gsum -= goutsum; bsum -= boutsum; stackstart = stackpointer - radius + div; ir = stackstart % div; // same as sir routsum -= stack[ir][0]; goutsum -= stack[ir][1]; boutsum -= stack[ir][2]; if (x == 0) vmin[y] = min(y + r1, hm) * w; ip = x + vmin[y]; stack[ir][0] = r[ip]; stack[ir][1] = g[ip]; stack[ir][2] = b[ip]; rinsum += stack[ir][0]; ginsum += stack[ir][1]; binsum += stack[ir][2]; rsum += rinsum; gsum += ginsum; bsum += binsum; stackpointer = (stackpointer + 1) % div; ir = stackpointer; // same as sir routsum += stack[ir][0]; goutsum += stack[ir][1]; boutsum += stack[ir][2]; rinsum -= stack[ir][0]; ginsum -= stack[ir][1]; binsum -= stack[ir][2]; yi += w; } } // Unlocks everything AndroidBitmap_unlockPixels(env, bitmapIn); AndroidBitmap_unlockPixels(env, bitmapOut); LOGI ("Bitmap blurred."); } int min(int a, int b) { return a > b ? b : a; } int max(int a, int b) { return a > b ? a : b; }

    Read the article

  • Two Python distributions, sudo picking the wrong one

    - by DHK
    I'm back to Linux after more than a 10-year absence (a fool, methinks), and a little rusty in the sysadmin department. I'm faced with an issue with my Python distribution. I'm using Python 2.7, based on the Anaconda flavour. I followed the standard guidance, but recently I discovered an issue that I'm not sure how to fix. Under sudo, the standard Python that comes with Ubuntu is used; under my user account, python points to the Anaconda version: dhk@localhost:~/home/$which python /opt/anaconda/bin/python dhk@localhost:~/home/$sudo which python /usr/bin/python This is an issue because sudo pip [anything] usually acts on the wrong directory, yet I cannot use it without sudo.
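
    A quick way to confirm exactly which installation each invocation ends up on (sudo typically resets PATH via its secure_path setting, which is why the two which results differ) is to ask the interpreter itself. A small check script, run once normally and once under sudo; the filename is just a placeholder:

        # which_python.py - report which Python installation is actually running.
        import sys

        print("executable:", sys.executable)  # e.g. /opt/anaconda/bin/python vs /usr/bin/python
        print("prefix:    ", sys.prefix)
        print("version:   ", sys.version.split()[0])

    Comparing python which_python.py with sudo python which_python.py shows whether the mismatch is purely a PATH issue; invoking the Anaconda tools by full path (for example sudo /opt/anaconda/bin/pip) sidesteps it.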

    Read the article

  • Managing multiple reverse proxies for one virtual host in apache2

    - by Chris Betti
    I have many reverse proxies defined for my js-host VirtualHost, like so: /etc/apache2/sites-available/js-host <VirtualHost *:80> ServerName js-host.example.com [...] ProxyPreserveHost On ProxyPass /serviceA http://192.168.100.50/ ProxyPassReverse /serviceA http://192.168.100.50/ ProxyPass /serviceB http://192.168.100.51/ ProxyPassReverse /serviceB http://192.168.100.51/ [...] ProxyPass /serviceZ http://192.168.100.75/ ProxyPassReverse /serviceZ http://192.168.100.75/ </VirtualHost> The js-host site is acting as shared config for all of the reverse proxies. This works, but managing the proxies involves edits to the shared config, and an apache2 restart. Is there a way to manage individual proxies with a2ensite and a2dissite (or a better alternative)? My main objective is to isolate each proxy config as a separate file, and manage it via commands. First Attempt I tried making separate files with their own VirtualHost entries for each service: /etc/apache2/sites-available/js-host-serviceA <VirtualHost *:80> ServerName js-host.example.com [...] ProxyPass /serviceA http://192.168.100.50/ ProxyPassReverse /serviceA http://192.168.100.50/ </VirtualHost> /etc/apache2/sites-available/js-host-serviceB <VirtualHost *:80> ServerName js-host.example.com [...] ProxyPass /serviceB http://192.168.100.51/ ProxyPassReverse /serviceB http://192.168.100.51/ </VirtualHost> The problem with this is apache2 loads the first VirtualHost for a particular ServerName, and ignores the rest. They aren't "merged" somehow as I'd hoped.
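
    One pattern that gets close to a2ensite-style management (a sketch only, not tested against this exact setup): keep a single Include line inside the shared vhost, e.g. Include /etc/apache2/js-host-proxies/*.conf, and drive per-service snippet files with a small script. A rough Python sketch, where the snippet directory, file naming and reload command are assumptions to adapt:

        import pathlib
        import subprocess
        import sys

        SNIPPET_DIR = pathlib.Path("/etc/apache2/js-host-proxies")  # assumed to be Include'd by the vhost

        def enable(service: str, backend: str) -> None:
            """Write a per-service ProxyPass snippet and gracefully reload Apache."""
            SNIPPET_DIR.mkdir(parents=True, exist_ok=True)
            conf = SNIPPET_DIR / f"{service}.conf"
            conf.write_text(
                f"ProxyPass /{service} {backend}\n"
                f"ProxyPassReverse /{service} {backend}\n"
            )
            subprocess.check_call(["apachectl", "-k", "graceful"])

        def disable(service: str) -> None:
            """Remove the snippet for a service and gracefully reload Apache."""
            (SNIPPET_DIR / f"{service}.conf").unlink()
            subprocess.check_call(["apachectl", "-k", "graceful"])

        if __name__ == "__main__":
            # e.g.: proxyctl.py enable serviceA http://192.168.100.50/
            cmd, *args = sys.argv[1:]
            enable(*args) if cmd == "enable" else disable(*args)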

    Read the article

  • One National Team One Event – SharePoint Saturday Kansas City

    - by MOSSLover
    I wasn't expecting to run an event from 1,000 miles away, but some stuff happened, you know, like it does, and I opted in. It was really weird, because people asked: why are you living in NJ and running Kansas City? I did move, but it was like my baby, and Karthik didn't have the ability to do it this year. I found it really challenging, because I could not physically be in Kansas City. At first I was freaking out, and Lee Brandt, Brian Laird, and Chris Geier offered to help. Somehow I couldn't come the day of the event; time-wise it just didn't work out. I could do all the legwork prior to the event, but weekends just were not good. I was going to be in DC until March or April on the weekdays, so leaving that weekend was too tough. As it worked out, Lee was my eyes and ears for the venue. Brian was the sponsor and prize box coordinator if anyone needed to send items. Lee also helped Brian move all the boxes the day of the event. I did everything we could do electronically, such as getting the sponsors to coordinate with Michael Lotter on invoicing, getting the speakers, posting the submissions, budgeting the money, setting up a speaker dinner by phone, plus all that other stuff you do behind the scenes. Chris was there to help Lee and Brian the day of the event and help us out with the speaker dinner. Karthik finally got back from India, and he was there the night before getting the folders together and the signs and stuffing it all. Jason Gallicchio (my cohort for SPS NYC) also helped me out, as he did the schedule and helped with posting the speakers' abstracts, and so did Chris Geier by posting the bios. The lot of them enlisted a few other monkeys to help out. It was the weirdest thing I've ever seen, but it worked. Around 100+ attendees ended up showing, and I hear it was a great event. Jason, Michael, Chris, Karthik, Brian, and Lee are not all from the same area, but they helped me out in bringing this event together. It was a national SharePoint Saturday team that brought together a specific local event for Kansas City. It's like a metaphor for the entire SharePoint Community: we help our own kind out; we don't let one of our own fail. I know Lee and Brian aren't technically SharePoint people, but they are honorary SharePoint Community members. Thanks, everyone, for the support and help in bringing this event together.

    Read the article

  • Microsoft Silverlight MVP one more time

    - by pluginbaby
    Another wonderful first email of the year… announcing that I've just been re-awarded Most Valuable Professional (MVP) by Microsoft for Silverlight. This is my 5th year in a row as an MVP, and I am still very honoured and excited! In 2010 I had the pleasure of being involved in many community events around Silverlight, speaking at Microsoft conferences and user groups (doing the launch of the Vancouver Silverlight User Group was fun!), as well as taking part in worldwide conferences like MIX in Las Vegas and the MVP Summit in Redmond. I also took on new kinds of activities in 2010: I wrote questions for the first Microsoft Silverlight certification exam (70-506), and I was a technical reviewer for 3 Silverlight books. I finally started to share more on Twitter @LaurentDuveau. In 2010 the content of this blog was mostly about Silverlight; I expect it to be the same in 2011, plus a touch of Windows Phone as well. I already know that 2011 will be a hell of a good year… I'll be at the next MVP Summit in Seattle, I'll also be a speaker at DevTeach, which comes back to Montreal (at last!), and I have some nice Silverlight training plans for France and Tunisia. More than that, my business RunAtServer is healthy (proud of my team!) and I have insane news and a very big surprise coming on that front… stay tuned! Happy New Year!

    Read the article

  • How to Get All the Windows 8 Editions on One Install Disk

    - by Taylor Gibb
    There are a lot of different versions of Windows, but you probably didn't know that, short of the Enterprise edition, the disc or image that you own contains all versions for that architecture. Read on to see how we can use them to make a universal Windows 8 install disc. Things You Will Need An x86 Version of Windows 8 An x64 Version of Windows 8 An x86 Version of Windows 8 Enterprise An x64 Version of Windows 8 Enterprise A Windows 8 PC Note: While we will use all the images above, you don't really need the Enterprise edition. You could always leave out parts of the tutorial if you know what you are doing; if you are not comfortable with that and still want to follow through, you could always grab the Enterprise evaluation images that are available for free to the public on MSDN. Getting Started To get started you will need to download the Windows 8 ADK from Microsoft. Once it has downloaded, go ahead and install it; you will only need the Deployment Tools, so be sure to uncheck the rest of the options. Lastly, you will also need to create the following folder structure on the root of your C:\ drive to make things a bit easier. C:\Windows8Root C:\Windows8Root\x86 C:\Windows8Root\x64 C:\Windows8Root\Enterprisex86 C:\Windows8Root\Enterprisex64 C:\Windows8Root\Temp C:\Windows8Root\Final OK, let's get started. Making The Image The first thing we need to do is create a base image, so mount the x86 version of Windows 8 and copy its files to: C:\Windows8Root\Final Now move the install.wim file from: C:\Windows8Root\Final\sources To: C:\Windows8Root\x86 Next, go ahead and copy the install.wim file from the other 3 images (Windows 8 x64, Windows 8 Enterprise x86 and Windows 8 Enterprise x64) to the respective folders in Windows8Root; the install.wim file can be located at: D:\sources\install.wim Note: The above assumes that the images are always mounted at drive D. Remember that each install.wim is different, so don't copy them to the wrong directories or the rest of the tutorial won't work. Next switch to the Metro Start Screen and open the Deployment and Imaging Tools Environment. Note: If you are not a local administrator on your PC, you will need to right-click on it and choose to run it as an administrator.
    Now run the following commands: Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\x86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex86\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:2 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\x64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Pro with Media Center" /compress:maximum Dism /Export-Image /SourceImageFile:c:\Windows8Root\Enterprisex64\install.wim /SourceIndex:1 /DestinationImageFile:c:\Windows8Root\Final\sources\install.wim /DestinationName:"Windows 8 Enterprise" /compress:maximum Next navigate to: C:\Windows8Root\sources\ and create a new text file. You will need to call it: EI.cfg Then edit it to look like the following: The last thing we need to do is work some magic to get Windows Media Center added to the WMC editions of Windows 8. For that I have written a little script to make it easier for everybody; you can grab it here. Once you have downloaded it, extract it. In order to use it, right-click in the bottom left-hand corner of the screen and open an elevated command prompt. Then go ahead and paste the following into the command prompt window: powershell.exe -ExecutionPolicy Unrestricted -File C:\Users\Taylor\Documents\HTGWindows8Converter.ps1 Note: You will need to replace the path to the script; another thing to note is that if the path you replace it with has spaces, you will need to enclose the path in quotes. The script should kick off straight away and has some progress bars you can watch while it does its thing. Halfway through, another window will pop open, which will start creating your final ISO image. When it's complete, close the command prompt and you should have an ISO image on the root of your C: drive called HTGWindows8.iso. That's all there is to it.

    Read the article

  • SEO: Moving articles from one domain to another

    - by Melanie
    Currently I have articles up on a website (site A) that is not mine (but I can remove the articles). The articles aren't faring well, not only due to the recent Google changes, but because they really could do better if I made some tweaks myself instead of relying on the domain owner's SEO skills. So I would like to set up my own website and have just my articles on it (site B). In the past when I've moved content I've set up redirects, but this time I can't do that. What would be the best way to move the articles without having to worry about them being counted as duplicate content or any other lame stuff? Should I: A) save the articles on my computer, remove them from site A, and wait for Google to remove them from the index (several months), or B) remove the articles from site A and immediately place them on site B?

    Read the article

  • Integrating FedEx Web Services into .Net, stuck at step 1

    - by Matt Dawdy
    I'm signed up, I've downloaded sample code, I've got a WSDL... and yet I have no idea how to get this stuff into my existing .Net application. The WSDL was in a zip file, not at a URL, so I can't just "Add Web Reference." I've run the wsdl tool from the .Net command prompt, and it made a nice class for me... yet dropping that into my web_reference folder doesn't give me any kind of instantiable class. I know I'm missing something stupid. Can someone point me in the right direction, please?

    Read the article

  • JMX Based Monitoring - Part One

    - by Anthony Shorten
    In all versions of the Oracle Utilities Application Framework there is the ability to use Java Management eXtensions (JMX) to both manage and monitor the various components of the product. This means that sites can use a JSR120-compliant JMX browser or JMX console to view or manage the components of the product with little or no configuration required. In each version we have progressively added JMX capabilities to give IT groups more detailed information. In Oracle Utilities Application Framework V2.1 and above it was possible to use the MBeans provided by the Web Application Server to monitor the online component of the product as well as manage the configuration. Also, with a few additional Java options it is possible to get a good level of detail about the Java Virtual Machine, including memory and thread usage. In Oracle Utilities Application Framework V2.2 and above, we added support for Java 5 statistics (Java enabled them by default) and database pool statistics, and also added the ability to manage and monitor the batch component of the architecture. Now, in Oracle Utilities Application Framework V4 and above, we have added support for Java 6 MXBeans, online management of the cache using JMX, additional JVM information and performance monitoring using JMX. JMX allows the product to be managed from a common console such as Oracle Enterprise Manager, Tivoli or HP OpenView (and a lot more). Over the next week or so I will be compiling a set of blog entries discussing what is available (in summary format) using JMX and how to get access to the JMX statistics for your version of the product.

    Read the article

  • org.apache.commons.httpclient.HttpClient stuck on request

    - by Roman
    Hi all, I have this code: while(!lastPage && currentPage < maxPageSize){ StringBuilder request = new StringBuilder("http://catalog.bizrate.com/services/catalog/v1/us/" + " some more ..."); currentPage++; HttpClient client = new HttpClient(new MultiThreadedHttpConnectionManager()); client.getHttpConnectionManager().getParams().setConnectionTimeout(15000); GetMethod get = new GetMethod(request.toString()); HostConfiguration configuration = new HostConfiguration(); int iGetResultCode = client.executeMethod(configuration, get); if (iGetResultCode != HttpStatus.SC_OK) { System.err.println("Method failed: " + get.getStatusLine()); return; } XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(get.getResponseBodyAsStream()); while (reader.hasNext()) { int type = reader.next(); // some more xml parsing ... } reader.close(); get.releaseConnection(); } Somehow the code gets stuck from time to time on the line that executes the request. I can't find the configuration for a request timeout (as opposed to the connection timeout). Can someone help me, or is there something I am doing that is basically wrong? The client I am using.

    Read the article

  • Upgrading only several packages, or packages from one source

    - by Cédric Girard
    We use the deb/apt system to deploy PHP software (around 200, plus libraries with dependencies). We have a build server, scripts, and a private repository. It's OK and runs fine, but we want to update our own packages very often, and update Ubuntu packages only when our sysadmins have time to handle them. How can we do this? The only solution I see for now is to iterate over the package list and do an apt-get install $packagename. Not very easy or even resilient. Any other ideas?

    Read the article

  • OneAPI Pilot

    - by Manish Agrawal
    Presentations made at Mobile World Congress, MWC 2010, on the Canadian OneAPI Pilot by Graham Trickey (GSMA), and Shane Logan (Telus). Thanks Alan for sharing it.

    Read the article

  • App Engine index building stalled/stuck

    - by Alexander
    Hi, I am having a problem with indexes building in my App Engine application. There are only about 200 entities in the indexes that are being built, and the process has now been running for over 24 hours. My application name is romanceapp. Is there any way that I can restart or clear the indexes that are being built? Thank you and kind regards, Alexander M.

    Read the article

  • VB FFT - stuck understanding relationship of results to frequency

    - by WaveyDavey
    Trying to understand an FFT (Fast Fourier Transform) routine I'm using (stealing)(recycling). Input is an array of 512 data points which are a sampled waveform. Test data is generated into this array. The FFT transforms this array into the frequency domain. I'm trying to understand the relationship between frequency, period, sample rate and position in the FFT array. I'll illustrate with examples: ======================================== Sample rate is 1000 samples/s. Generate a set of samples at 10Hz. Input array has peak values at arr(28), arr(128), arr(228) ... period = 100 sample points; peak value in the FFT array is at index 6 (excluding a huge value at 0) ======================================== Sample rate is 8000 samples/s. Generate a set of samples at 440Hz. Input array peak values include arr(7), arr(25), arr(43), arr(61) ... period = 18 sample points; peak value in the FFT array is at index 29 (excluding a huge value at 0) ======================================== How do I relate the index of the peak in the FFT array to frequency?
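
    For reference, the usual relationship is f ≈ k · sample_rate / N for bin k of an N-point FFT, and bin 0 is the DC component (the "huge value at 0"). That gives bin ≈ 5 for 10Hz at 1000 samples/s and bin ≈ 28 for 440Hz at 8000 samples/s with N = 512, which matches the observed indices give or take one-based indexing. A quick check in Python/numpy (chosen here just for brevity; the parameters are those of the second example above):

        import numpy as np

        fs, n, f0 = 8000.0, 512, 440.0        # sample rate, FFT size, test frequency

        t = np.arange(n) / fs
        x = np.sin(2 * np.pi * f0 * t)        # test waveform

        spectrum = np.abs(np.fft.rfft(x))
        k = int(np.argmax(spectrum[1:])) + 1  # skip bin 0 (DC / mean of the signal)
        print("peak bin:", k, "->", k * fs / n, "Hz")  # bin 28 -> 437.5 Hz, the bin nearest 440 Hz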

    Read the article

  • SQL SERVER – Copy Statistics from One Server to Another Server

    - by pinaldave
    I was recently working on a performance tuning project in Dubai (yes, I was able to see the tallest tower from the window of my workplace), and I had a very interesting learning experience there. There was a situation where we wanted to receive the schema of the original database from a certain client. However, the client was not able to provide us any data due to privacy issues. The schema was very important, but without access to the underlying data it was a bit difficult to judge the queries: without any primary data, all the queries ran in 0 (zero) milliseconds and all used nested loops, as there was no data to be returned. Even though we had CPU-offending queries, they were not doing anything without the data in the tables. This was really a challenge, as I did not have access to the production server's data and could not recreate the production scenarios without it. Well, I was confused, but Ruben from Solid Quality Mentors, Spain taught me a new trick. He suggested that when the table schema is generated, we can create the statistics along with it. Here is how we did that: once the statistics are created along with the schema, even without data in the tables, all the queries behave as they do on the production server. This way, without access to the data, we were able to recreate the same scenario as the production server on the development server. When you look at the script, you will find that the statistics were also generated along with the query; you will find the statistics included in a WITH STATS_STREAM clause. A very simple and effective script. Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Read the article

  • How can one compile Darwinia under Linux?

    - by Tobias Kienzler
    Introversion is now offering the Darwinia+Multiwinia source for sale, stating Note: You will need Windows and Visual Studio 2008 to build the games. We have tested that the code compiles correctly on the PC, but you will need to put some effort in to compile for Mac / Linux. There is no Xbox code in this release. Has anyone put this effort in already? The best answer would (be yes and) mention modifications that had to be done (also mentioning the distribution used), the second-to-best would explain why it doesn't work right now. Since I haven't bought the source pack I'm relying on up-votes as confirmation, so please comment on answers if something doesn't work or has to be modified e.g. for another Linux distribution. I'm currently using Ubuntu 8.04, but 10.04 or e.g. Gentoo would be a choice, too. EDIT: Clarification: The intention is to make a new game with that engine, but since this question is a prerequisite, it seems suitable here. UPDATE It is a bit off topic, but for those interested, Introversion added the source code of Uplink, Darwinia, Multiwinia and DEFCON to The Humble Introversion Bundle, so don't miss it!

    Read the article
