Search Results

Search found 31721 results on 1269 pages for 'adjacency list'.


  • Facebook implementation in android

    - by Sanat Pandey
    I am implementing Facebook in my app through the FBRocket jar, but it throws a ClassNotFound error and I don't know why, because I have already added that jar to the libraries. Please help me out.

    05-09 19:04:28.933: ERROR/AndroidRuntime(759): FATAL EXCEPTION: main
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): java.lang.NoClassDefFoundError: net.xeomax.FBRocket.FBRocket
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at org.shopzilla.android.moretab.SettingActivity.shareFacebook(SettingActivity.java:73)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at org.shopzilla.android.moretab.SettingActivity$2.onClick(SettingActivity.java:63)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.view.View.performClick(View.java:2485)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.view.View$PerformClick.run(View.java:9080)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.os.Handler.handleCallback(Handler.java:587)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.os.Handler.dispatchMessage(Handler.java:92)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.os.Looper.loop(Looper.java:123)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at android.app.ActivityThread.main(ActivityThread.java:3683)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at java.lang.reflect.Method.invokeNative(Native Method)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at java.lang.reflect.Method.invoke(Method.java:507)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:839)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:597)
    05-09 19:04:28.933: ERROR/AndroidRuntime(759): at dalvik.system.NativeStart.main(Native Method)

    Code:

    package org.shopzilla.android.moretab;

    import java.util.List;

    import net.xeomax.FBRocket.FBRocket;
    import net.xeomax.FBRocket.Facebook;
    import net.xeomax.FBRocket.ServerErrorException;

    import org.apache.http.NameValuePair;
    import org.apache.http.client.HttpClient;
    import org.shopzilla.android.common.R;
    import org.shopzilla.android.facebook.FacebookActivity;
    import org.shopzilla.android.facebook.FacebookWebOAuthActivity;
    import org.shopzilla.android.twitter.TwitterActivity;
    import org.shopzilla.android.twitter.TwitterWebOAuthActivity;

    import twitter4j.http.RequestToken;

    import android.app.Activity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.Button;
    import android.widget.TextView;

    public class SettingActivity extends Activity {
        String bytesSent;
        HttpClient httpclient;
        int count1;
        // List with parameters and their values
        List<NameValuePair> nameValuePairs;
        TextView mText;
        Button btn_facebook;
        Button btn_twitter;
        FBRocket fbRocket;
        RequestToken rToken;
        String oauthVerifier;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.more_setting);
            Button btn_twitter = (Button) findViewById(R.id.btn_more_setting_twitter);
            Button btn_facebook = (Button) findViewById(R.id.btn_More_setting_facebook);

            btn_twitter.setOnClickListener(new View.OnClickListener() {
                public void onClick(View arg0) {
                    Intent intent = new Intent(SettingActivity.this, TwitterActivity.class);
                    startActivity(intent);
                    //displayTwitterAuthorization();
                }
            });

            btn_facebook.setOnClickListener(new View.OnClickListener() {
                public void onClick(View v) {
                    /*Intent intent = new Intent(SettingActivity.this, FacebookActivity.class);
                    startActivity(intent);*/
                    shareFacebook();
                    //displayFacebookAuthorization();
                    //shareFacebook();
                }
            });
        }

        public void shareFacebook() {
            fbRocket = new FBRocket(SettingActivity.this, "ShopZilla", "172619129456913");
            if (fbRocket.existsSavedFacebook()) {
                fbRocket.loadFacebook();
            } else {
                fbRocket.login(R.layout.facebook);
            }
        }

        public void onLoginFail() {
            fbRocket.displayToast("Login failed!");
            fbRocket.login(R.layout.facebook);
        }

        public void onLoginSuccess(Facebook facebook) {
            fbRocket.displayToast("Login success!");
            try {
                facebook.setStatus("This is your status");
                fbRocket.displayDialog("Status Posted Successfully!! " + facebook.getStatus());
            } catch (ServerErrorException e) {
                if (e.notLoggedIn()) {
                    fbRocket.login(R.layout.facebook);
                } else {
                    System.out.println(e);
                }
            }
        }
    }


  • Attack from anonymous proxy

    - by mmgn
    We got attacked by some very bored teenagers who registered in our forums and posted very explicit material through anonymous proxy websites like http://proxify.com/. Is there a way to check the registration IP against a blacklist database? Has anyone experienced this and had success dealing with it?
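
    One common way to act on this is to check the registration IP against a DNS-based blacklist (DNSBL) at signup time. Below is a minimal Python sketch of that lookup; the zone name is only an example of such a list, and what you do with a positive result (block, flag for moderation) is up to you.

    import socket

    def is_listed(ip, zone="dnsbl.tornevall.org"):
        # DNSBLs are queried by reversing the IP's octets and appending the zone,
        # e.g. 1.2.3.4 -> 4.3.2.1.dnsbl.example.org; any A record means "listed".
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            socket.gethostbyname(query)
            return True
        except socket.gaierror:   # NXDOMAIN -> not listed
            return False

    if __name__ == "__main__":
        print(is_listed("127.0.0.2"))   # most DNSBLs list this address for testing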


  • Questions about Wordpress 3.0 RC

    - by Nimbuz
    I'm looking to upgrade my blog from WordPress 2.5 to 3.0 RC, but I'm not sure whether: it is stable; it will support my existing v2.5 plugins; it will support my custom themes, or whether I have to redesign them from scratch for 3.0. Many thanks for your help!


  • Mac OS X missing disk

    - by leo
    In Boot Camp, it only sees the partition size as 149 GB, while Disk Utility shows only one partition with a size of 320 GB. Why do diskutil and df give me different sizes? Also, how can I fix it? Thanks.

    df -h:

    Filesystem     Size   Used   Avail  Capacity  Mounted on
    /dev/disk0s2   149Gi  20Gi   129Gi  14%       /
    devfs          110Ki  110Ki  0Bi    100%      /dev
    map -hosts     0Bi    0Bi    0Bi    100%      /net
    map auto_home  0Bi    0Bi    0Bi    100%      /home

    and diskutil list:

    /dev/disk0
       #:  TYPE                    NAME     SIZE       IDENTIFIER
       0:  GUID_partition_scheme            *320.1 GB  disk0
       1:  EFI                              209.7 MB   disk0s1
       2:  Apple_HFS               Mac HD   319.6 GB   disk0s2


  • Delete files from directory: memory exhausted

    - by codeholic
    This question is a logical continuation of http://serverfault.com/questions/45245/how-can-i-delete-all-files-from-a-directory-when-it-reports-argument-list-too-lo

    I have:

    drwxr-xr-x 2 doreshkin doreshkin 198291456 Apr 6 21:35 session_data

    I tried:

    find session_data -type f -delete
    find session_data -type f | xargs rm -f
    find session_data -maxdepth 1 -type f -print0 | xargs -r0 rm -f

    The result is the same: find: memory exhausted. What can I do to remove this directory?
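
    If the stock tools keep running out of memory, one workaround is to stream the directory entries and delete them one at a time instead of building the whole listing first. A minimal Python sketch of that idea follows (assumption: Python 3.6+ for os.scandir used as a context manager); the directory name is the one from the question.

    import os

    def purge(directory):
        # os.scandir iterates the directory lazily, so memory use stays flat even
        # with millions of entries; each regular file is unlinked as it is seen.
        removed = 0
        with os.scandir(directory) as entries:
            for entry in entries:
                if entry.is_file(follow_symlinks=False):
                    os.unlink(entry.path)
                    removed += 1
        return removed

    if __name__ == "__main__":
        print(purge("session_data"), "files removed")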


  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box, via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

    HTTP/1.1 200 OK
    Server: Apache/2.2.22 (Ubuntu)
    Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
    ETag: "a57c2-3850-4cb7ea73db6c0"
    Accept-Ranges: bytes
    Content-Length: 14416
    Cache-Control: max-age=1209600
    Expires: Thu, 25 Oct 2012 22:55:14 GMT
    Content-Type: image/png
    Accept-Ranges: bytes
    Date: Thu, 11 Oct 2012 22:55:14 GMT
    X-Varnish: 1766703058
    Age: 0
    Via: 1.1 varnish
    Connection: keep-alive
    X-Varnish-Cache: MISS

    Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

    # Default backend definition. Set this to point to your content
    # server.
    #
    backend default {
      .host = "127.0.0.1";
      .port = "8080";
    }

    # Respond to incoming requests.
    sub vcl_recv {
      # Use anonymous, cached pages if all backends are down.
      if (!req.backend.healthy) {
        unset req.http.Cookie;
      }

      # Allow the backend to serve up stale content if it is responding slowly.
      set req.grace = 6h;

      # Pipe these paths directly to Apache for streaming.
      #if (req.url ~ "^/admin/content/backup_migrate/export") {
      #  return (pipe);
      #}

      # Do not cache these paths.
      if (req.url ~ "^/status\.php$" ||
          req.url ~ "^/update\.php$" ||
          req.url ~ "^/admin$" ||
          req.url ~ "^/admin/.*$" ||
          req.url ~ "^/flag/.*$" ||
          req.url ~ "^.*/ajax/.*$" ||
          req.url ~ "^.*/ahah/.*$") {
        return (pass);
      }

      # Do not allow outside access to cron.php or install.php.
      #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
      #  # Have Varnish throw the error directly.
      #  error 404 "Page not found.";
      #  # Use a custom error page that you've defined in Drupal at the path "404".
      #  # set req.url = "/404";
      #}

      # Always cache the following file types for all users. This list of extensions
      # appears twice, once here and again in vcl_fetch so make sure you edit both
      # and keep them equal.
      if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset req.http.Cookie;
      }

      # Remove all cookies that Drupal doesn't need to know about. We explicitly
      # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
      # running this code we find that either of these two cookies remains, we
      # will pass as the page cannot be cached.
      if (req.http.Cookie) {
        # 1. Append a semi-colon to the front of the cookie string.
        # 2. Remove all spaces that appear after semi-colons.
        # 3. Match the cookies we want to keep, adding the space we removed
        #    previously back. (\1) is first matching group in the regsuball.
        # 4. Remove all other cookies, identifying them by the fact that they have
        #    no space after the preceding semi-colon.
        # 5. Remove all spaces and semi-colons from the beginning and end of the
        #    cookie string.
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");

        if (req.http.Cookie == "") {
          # If there are no remaining cookies, remove the cookie header. If there
          # aren't any cookie headers, Varnish's default behavior will be to cache
          # the page.
          unset req.http.Cookie;
        } else {
          # If there are any cookies left (a session or NO_CACHE cookie), do not
          # cache the page. Pass it on to Apache directly.
          return (pass);
        }
      }
    }

    # Set a header to track a cache HIT/MISS.
    sub vcl_deliver {
      if (obj.hits > 0) {
        set resp.http.X-Varnish-Cache = "HIT";
      } else {
        set resp.http.X-Varnish-Cache = "MISS";
      }
    }

    # Code determining what to do when serving items from the Apache servers.
    # beresp == Back-end response from the web server.
    sub vcl_fetch {
      # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but
      # definitely in Drupal's case these responses are not cacheable by default.
      if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
        set beresp.ttl = 10m;
      }

      # Don't allow static files to set cookies.
      # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
      # This list of extensions appears twice, once here and again in vcl_recv so
      # make sure you edit both and keep them equal.
      if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset beresp.http.set-cookie;
      }

      # Allow items to be stale if needed.
      set beresp.grace = 6h;
    }

    # In the event of an error, show friendlier messages.
    sub vcl_error {
      # Redirect to some other URL in the case of a homepage failure.
      #if (req.url ~ "^/?$") {
      #  set obj.status = 302;
      #  set obj.http.Location = "http://backup.example.com/";
      #}

      # Otherwise redirect to the homepage, which will likely be in the cache.
      set obj.http.Content-Type = "text/html; charset=utf-8";
      synthetic {"
        <html>
        <head>
          <title>Page Unavailable</title>
          <style>
            body { background: #303030; text-align: center; color: white; }
            #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
            a, a:link, a:visited { color: #CCC; }
            .error { color: #222; }
          </style>
        </head>
        <body onload="setTimeout(function() { window.location = '/' }, 5000)">
          <div id="page">
            <h1 class="title">Page Unavailable</h1>
            <p>The page you requested is temporarily unavailable.</p>
            <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
            <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
          </div>
        </body>
        </html>
      "};
      return (deliver);
    }

    I'm getting a MISS and Age: 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
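
    As a side note, one way to see whether Varnish ever stores the object is to request the same URL twice and compare the headers of the second response: on a hit, Age rises above 0 and X-Varnish carries two transaction IDs. A small Python sketch of that check, assuming Python 3 and with the URL as a placeholder:

    import urllib.request

    URL = "http://example.com/sites/default/files/some-image.png"   # placeholder

    def fetch_headers(url):
        with urllib.request.urlopen(url) as resp:
            return {k.lower(): v for k, v in resp.headers.items()}

    fetch_headers(URL)              # first request should insert the object
    headers = fetch_headers(URL)    # second request should be served from cache

    # On a hit: Age > 0 and X-Varnish lists two XIDs (this request plus the one
    # that stored the object). X-Varnish-Cache is the header set by vcl_deliver.
    print("Age:", headers.get("age"))
    print("X-Varnish:", headers.get("x-varnish"))
    print("X-Varnish-Cache:", headers.get("x-varnish-cache"))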


  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive, which is now used only to access music files (Sibelius, audio, MIDI, Live, Logic, etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list:

    - make a complete filesystem clone with Antonio Diaz's ddrescue
    - run Disk Warrior on the copy and repair whatever errors occurred
    - wipe out all ACLs on the entire drive
    - set all permissions to the same value - wide open, 777
    - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive
    - transfer the data to a newly formatted drive folder by folder, resetting the Spotlight index between adding each one to watch for issues (interesting here is that no issues occurred except with the Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own, there was no trouble. It appears almost as though it may not be the content but the quantity or specific combination of data that results in problems)
    - use Data Rescue to transfer the data to yet another newly formatted drive to expose any missed hidden files

    Between each of the above steps I stopped Spotlight (searching for anything beginning with md in Activity Monitor - All Processes - and quitting it) and deleted the .Spotlight-V100 directory from the affected drive, then restarted Spotlight indexing by adding the drive to the Spotlight privacy list and removing it. In each case the same issue occurs: Spotlight begins indexing normally (or so it seems), then the estimated time increases, usually to 4 hours remaining. This is where it gets stuck - it continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the "4 hours remaining" situation, if I reattach it, Spotlight forever estimates the remaining time and never gets going again.

    So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (I used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are, as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M


  • How to add "most recent emails from this user" to Gmail inbox as a sidebar

    - by Scott B
    I use and love Gmail. However, since I use email for customer support, I'm always doing a cross-reference lookup via the search feature to see my past conversations with the person whose email I'm reading. I'd love to have a right-sidebar widget that shows me, for any email I choose to read, the list of previous conversations/emails with that person. Is this possible? I'm using Chrome. Ideally, this sidebar would bump or replace the contextual ads that now display over there.
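
    Short of a browser extension, the same cross-reference can be scripted against Gmail's IMAP interface and kept open in a separate window. Below is a rough Python sketch, assuming IMAP access is enabled on the account; the credentials, the correspondent's address and the "last 10 messages" limit are placeholders.

    # Sketch: list the most recent messages exchanged with one correspondent via
    # Gmail's IMAP interface. Credentials and the sender address are placeholders.
    import imaplib
    import email
    from email.header import decode_header, make_header

    USER = "you@example.com"          # placeholder
    APP_PASSWORD = "app-password"     # placeholder (an app password, not the real one)
    SENDER = "customer@example.com"   # the person you are cross-referencing

    imap = imaplib.IMAP4_SSL("imap.gmail.com")
    imap.login(USER, APP_PASSWORD)
    imap.select('"[Gmail]/All Mail"', readonly=True)

    # Find every message from (or to) that address; take the newest few.
    _, data = imap.search(None, "OR", "FROM", SENDER, "TO", SENDER)
    ids = data[0].split()[-10:]

    for msg_id in ids:
        _, msg_data = imap.fetch(msg_id, "(RFC822.HEADER)")
        msg = email.message_from_bytes(msg_data[0][1])
        subject = str(make_header(decode_header(msg.get("Subject", ""))))
        print(msg.get("Date"), "-", subject)

    imap.logout()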


  • Free application which reports that computers are running

    - by Darqer
    I'm searching for an application which reports that a computer is running. I imagine it as two pieces of software: the first part, some kind of dashboard with a list of active IPs, sits on the server and waits for information from remote hosts; the second part runs on the clients and reports that the client is working. Do you know of something like this? I'm searching for a free application that is lightweight and does not require installation.
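
    For what it's worth, the two-piece design described here (clients sending periodic heartbeats, a dashboard listing the IPs it has recently heard from) fits in a few lines of Python, so a small script may be an option if no packaged tool turns up. In the sketch below, the UDP port and the reporting interval are arbitrary assumptions.

    # Sketch of the heartbeat design: clients ping a server over UDP, the server
    # keeps a list of IPs it has recently heard from.
    # Run as "python beacon.py server" or "python beacon.py client <server-ip>".
    import socket, sys, time

    PORT = 9999          # assumption: any free UDP port
    INTERVAL = 30        # seconds between client heartbeats

    def server():
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        last_seen = {}
        while True:
            _, (ip, _) = sock.recvfrom(64)
            last_seen[ip] = time.time()
            # A host counts as active if it reported within two intervals.
            alive = [a for a, t in last_seen.items() if time.time() - t < 2 * INTERVAL]
            print("active:", ", ".join(sorted(alive)))

    def client(server_ip):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        while True:
            sock.sendto(b"alive", (server_ip, PORT))
            time.sleep(INTERVAL)

    if __name__ == "__main__":
        if sys.argv[1] == "server":
            server()
        else:
            client(sys.argv[2])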


  • Does the powershell cmdlet add to or replace out-of-office settings in Exchange 2007?

    - by boost
    When using PowerShell to set Out of Office in Exchange 2007, do multiple commands containing -StartTime and -EndTime add to some internal list that Exchange maintains, or does each successive command replace the previous one? For example, we have a staffer who is only in the office on Tuesdays and Fridays. We'd like to set up Exchange to send an Out-of-Office message to all internal senders on the days when he's not in. How is this best done?


  • Reliable access to Internet but not local network (not DNS or proxy issues)

    - by Ian Goldby
    I'm looking for help with a Vista Home Premium laptop that has trouble accessing any resource on our home network, but accesses the Internet just fine. The set-up is this: the Vista laptop and a MacBook Pro connect wirelessly to the router-modem, and a Synology DS212j NAS drive has a wired connection to the router-modem. Devices on the local network are always referred to by IP address, so this cannot be a DNS issue.

    The MacBook Pro connects reliably to the NAS via AFP (network shared folders), SMB (network shared folders) and HTTP. The Vista laptop connects to and browses sites on the Internet without any problems. It can log into the NAS via SMB and list the shared folders (so there is nothing wrong with the log-in credentials), but when it tries to open any of the folders, Explorer just hangs with the spinning cursor for several minutes and then says "\\192.168.1.64\shared\Photos is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. The specified network name is no longer available." It can ping the NAS successfully. If I try to open the NAS drive's web interface, the browser just hangs. This is the same with IE, Firefox and Chrome. (There is no proxy.) I can log into the NAS drive with FTP and navigate directories, but when I try to list the contents of a directory with more than a handful of entries, the FTP client hangs.

    I set up a website on the MacBook. The Vista laptop was able to load some of the pages, but loading any of the images was very hit and miss: images embedded in HTML pages never worked no matter how many times I reloaded the page, but when I linked directly to an image it did load (though several attempts were sometimes needed). I tried all of this with the Windows Firewall turned off, and with AVG turned off. That made no difference. I'd really appreciate any suggestions anyone can make. The fact that the Vista laptop has trouble with HTTP and FTP as well as SMB connections suggests to me that this is a problem at the TCP level or below. But don't forget it accesses sites outside the LAN with no problems.


  • Should I worry about making my picasa web albums public?

    - by Motti
    I chose the public option for all my albums in Picasa Web Albums; these mostly (90%) contain pictures of my children, which I share with my family. Every so often somebody I don't know adds me as a favorite; at the current count I have 7 people in my fan list (none of whom I know) and only three of them have any public albums. Is this creepy? I take care not to upload any pictures that may attract perverts. What would you recommend: private by default, or continue with public?


  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude, latitude) pair found in the same record. The code looks like so:

    def read_csv_zip(path, timezones):
        with ZipFile(path) as z, z.open(z.namelist()[0]) as input:
            csv_rows = csv.reader(input)
            header = csv_rows.next()
            check, converters = get_aux_stuff(header)
            for csv_row in csv_rows:
                if check(csv_row):
                    row = {
                        converter[0]: converter[1](value)
                        for converter, value in zip(converters, csv_row)
                        if allow_field(converter)
                    }
                    ts = row['ts']
                    lng, lat = row['loc']
                    found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng - tz_lookup_radius, lat - tz_lookup_radius], [lng + tz_lookup_radius, lat + tz_lookup_radius]]}}}))
                    if found_tz_entry:
                        tz_name = found_tz_entry['tz']
                        local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None)
                        row['tz'] = tz_name
                    else:
                        local_ts = (ts.astimezone(utc) + timedelta(hours=int(lng / 15))).replace(tzinfo=None)
                    row['local_ts'] = local_ts
                    yield row

    def insert_documents(collection, source, batch_size):
        while True:
            items = list(itertools.islice(source, batch_size))
            if len(items) == 0:
                break
            try:
                collection.insert(items)
            except:
                for item in items:
                    try:
                        collection.insert(item)
                    except Exception as exc:
                        print("Failed to insert record {0} - {1}".format(item['_id'], exc))

    def main(zip_path):
        with Connection() as connection:
            data = connection.mydb.data
            timezones = connection.timezones.data
            insert_documents(data, read_csv_zip(zip_path, timezones), 1000)

    The code proceeds as follows: every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles renamed (from those appearing in the csv header), and some values converted (to datetime, to integers, to floats, etc.). For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful, that timezone is used to convert the record timestamp (Pacific Time) to the respective local timestamp. If no mapping is found, a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it.

    The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advice on how to improve it. Thanks.

    EDIT

    The timezones collection contains 8176040 records, each containing four values:

    > db.data.findOne()
    { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" }

    EDIT2

    OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only is there no improvement, but it works even more slowly now! Maybe rtree could be fine-tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task.

    EDIT3

    Profile output when using collection.find_one:

    >>> p.sort_stats('cumulative').print_stats(10)
    Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile

    64549590 function calls (64549180 primitive calls) in 1231.257 seconds

    Ordered by: cumulative time
    List reduced from 730 to 10 due to restriction <10>

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.012    0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>)
         1    0.001    0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main)
         1  853.558  853.558  853.558  853.558 {raw_input}
         1    0.598    0.598  370.510  370.510 ImportDataIntoMongo.py:165(insert_documents)
    343407    9.965    0.000  359.034    0.001 ImportDataIntoMongo.py:137(read_csv_zip)
    343408    2.927    0.000  287.035    0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one)
    343408    1.842    0.000  274.803    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next)
    343408    2.542    0.000  271.212    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh)
    343408    4.512    0.000  253.673    0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message)
    343408    0.971    0.000  242.078    0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response)

    Profile output when using index.intersection:

    >>> p.sort_stats('cumulative').print_stats(10)
    Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile

    41542960 function calls (41542536 primitive calls) in 2889.164 seconds

    Ordered by: cumulative time
    List reduced from 778 to 10 due to restriction <10>

    ncalls  tottime  percall  cumtime  percall filename:lineno(function)
         1    0.028    0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>)
         1    0.017    0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main)
         1 2365.526 2365.526 2365.526 2365.526 {raw_input}
         1    0.766    0.766  502.817  502.817 ImportDataIntoMongo.py:180(insert_documents)
    343407    9.147    0.000  491.433    0.001 ImportDataIntoMongo.py:152(read_csv_zip)
    343406    0.571    0.000  391.394    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection)
    343406  379.957    0.001  390.824    0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj)
    686513   22.616    0.000   38.705    0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects)
    343406    6.134    0.000   33.326    0.000 ImportDataIntoMongo.py:162(<dictcomp>)
       346    0.396    0.001   30.665    0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert)

    EDIT4

    I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.
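
    Since the profiles show the per-record lookups dominating the run time, one incremental improvement worth trying (independent of parallelization) is to memoize the timezone lookup on a coarse coordinate grid, so that repeated or nearby locations reuse an earlier MongoDB result instead of issuing a new query. A rough sketch against the code above follows; the grid size is an assumption and trades precision near timezone boundaries for fewer queries.

    # Sketch: cache timezone lookups on a rounded (lng, lat) grid so that nearby
    # records reuse a previous MongoDB result instead of querying again.
    from bson.son import SON

    GRID = 0.5          # degrees; coarser means fewer queries, fuzzier boundaries
    _tz_cache = {}      # (grid_lng, grid_lat) -> tz name or None

    def lookup_tz(timezones, lng, lat, radius):
        key = (round(lng / GRID), round(lat / GRID))
        if key not in _tz_cache:
            entry = timezones.find_one(SON({'loc': {'$within': {'$box': [
                [lng - radius, lat - radius],
                [lng + radius, lat + radius]]}}}))
            _tz_cache[key] = entry['tz'] if entry else None
        return _tz_cache[key]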


  • Password manager bug with Firefox 3.6.13

    - by Nicolas Buduroi
    I'm having trouble with the latest Firefox (3.6.13) password manager. For a website I'm working on, it doesn't fill the password field for any login credential saved. I've looked into the options "Saved passwords" list and they are all there with the correct passwords. I thought at first that the website was blocking this feature in some way, but the password managers in Chrome (on the same Windows 7 machine) and Iceweasel (in a virtual Debian 6 machine) work well. Any idea about what could cause this problem?


  • Solaris SPARC 10 32bit mode

    - by TM.
    I'm looking for a definitive answer, does Solaris 10 running on a SPARC machine support booting into 32bit mode? I've found one site that states Solaris 8 was the last version that supported booting in a 32bit mode for SPARC. I've read multiple items that explain how to boot Solaris into 32bit mode, however they did not list the Solaris version. We've tried all the ways specified, but the system keeps booting into 64bit mode.


  • Can't connect my 3G to home router umbrella

    - by Cindi
    I can't connect my iPhone 3G to my home router. I added my iPhone's Wi-Fi MAC address (I followed the directions to locate it) to the router's filter, and the iPhone's Wi-Fi address appears on the allowed list. However, when I try to connect using my iPhone, it shows Wi-Fi as not connected, and the umbrella corporation network comes up still locked and asks for a password. We don't have a password set up for our router access at home. I'm not real tech-y, so speak plainly... thanks


  • Any software to remove $NtUninstallxxxxx (Windows XP)

    - by Michael
    Is there any commercial or free software that will give me a list and descriptions of patch uninstall folders and let me remove selected ones? I've tried Windows XP Update Remover, but it seems it doesn't provide any information for the majority of items, and I have to delete them one by one... I also know I can do it manually, but I'm just wondering if there is more professional software to make it more accurate and quick.
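
    In the meantime, the inventory part can be scripted: each $NtUninstallKBnnnnnn$ folder under the Windows directory normally has a matching entry under the Uninstall registry key that carries a human-readable description. A rough Python 3 sketch of that pairing is below; it only lists, it does not delete, and the registry path is the usual location rather than a guarantee for every hotfix.

    # Sketch: pair each $NtUninstallKB...$ folder under the Windows directory with
    # the hotfix description registered under the Uninstall key (Windows XP).
    import os, re, winreg

    windir = os.environ.get("SystemRoot", r"C:\Windows")
    uninstall_key = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall"

    def display_name(kb):
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, uninstall_key + "\\" + kb) as k:
                return winreg.QueryValueEx(k, "DisplayName")[0]
        except OSError:
            return "(no uninstall entry found)"

    for entry in sorted(os.listdir(windir)):
        m = re.match(r"\$NtUninstall(KB\d+)\$", entry)
        if m:
            print(m.group(1), "-", display_name(m.group(1)))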


  • How do I print filenames residing on the server? [on hold]

    - by Suhail Gupta
    How do I list files on a server after I make an HTTP connection to it via telnet? I tried establishing the connection as shown in the first screenshot; after I push Enter I get to the next screen, but when I type ls (which is not visible as I type it) and press Enter, I end up at the screen in the last screenshot. (The screenshots from the original post are not reproduced here.) I want to establish a connection with the server and print the file names residing in the server directory. Note: the server OS is Windows Server 2003 and the web server is Microsoft-IIS/6.0.
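
    For background, HTTP itself has no ls command: once you are connected to port 80 you have to type an HTTP request line such as GET / HTTP/1.1, a Host: header and a blank line, and IIS will only answer with a file listing if directory browsing is enabled for that folder. Below is a small Python sketch of the same raw request; the host and path are placeholders.

    # Sketch: the raw HTTP request you would otherwise type into the telnet session.
    # Host and path are placeholders; a directory listing only comes back if the
    # IIS virtual directory has "directory browsing" enabled.
    import socket

    HOST = "server.example.com"   # placeholder
    PATH = "/files/"              # placeholder directory

    request = (
        "GET {} HTTP/1.1\r\n"
        "Host: {}\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).format(PATH, HOST)

    with socket.create_connection((HOST, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    print(response.decode("latin-1"))   # the HTML index page, if browsing is enabled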


  • WSS audiences don't show all AD Groups

    - by Mike
    Not all of my AD groups are showing in the Audience list. I want to create a new audience based on whether the user is in a group. Some AD groups show up, and others don't; many of the newer ones do not show up. My connection is pulling (I'm pretty sure) the whole of AD via the primary domain controller. This is MOSS 2007 on Windows Server 2003. Any ideas?


  • Cisco NAT vs Bridge vs BVI

    - by cjavapro
    The only devices on this particular LAN will all have public IP addresses. Also, the public IP address will be configured directly on each machine, so we will not translate between private and public IP addresses. If we used NAT, we would have to translate the public IP on the WAN to the public IP on the LAN. The only security feature I expect on the gateway is an access list. I don't really know much about networking, so I am sorry if this question is generic.

