Search Results

Search found 426 results on 18 pages for 'geo'.


  • Edit very large sql dump/text file (on linux)

    - by geo
    I have to import a large MySQL dump (up to 10 GB). However, the SQL dump is already predefined with a database structure and index definitions. I want to speed up the inserts by removing the index and table definitions, which means I have to remove/edit the first few lines of a 10 GB text file. What is the most efficient way to do this on Linux? Programs that require loading the entire file into RAM are overkill for me.
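
    A minimal streaming sketch of that approach, assuming the definitions to drop sit in the first N lines and that writing a trimmed copy next to the original is acceptable (the file name and line count below are hypothetical):

        import java.io.BufferedReader;
        import java.io.BufferedWriter;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class SkipHeader {
            public static void main(String[] args) throws Exception {
                int linesToSkip = 40; // hypothetical: number of DDL/index lines to drop
                try (BufferedReader in = Files.newBufferedReader(Paths.get("dump.sql"), StandardCharsets.UTF_8);
                     BufferedWriter out = Files.newBufferedWriter(Paths.get("dump-trimmed.sql"), StandardCharsets.UTF_8)) {
                    String line;
                    long n = 0;
                    while ((line = in.readLine()) != null) {
                        if (n++ < linesToSkip) continue; // stream past the unwanted header lines
                        out.write(line);
                        out.newLine();
                    }
                }
            }
        }

    Only one line is ever held in memory, so this works for arbitrarily large dumps; the usual shell equivalents are tail -n +K dump.sql > trimmed.sql or a sed range delete, which also stream the file.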

    Read the article

  • Using Option all over the place feels a bit awkward. Am I doing something wrong?

    - by Geo
    As a result of articles I read about the Option class, which helps you avoid NullPointerExceptions, I started to use it all over the place. Imagine something like this: var file:Option[File] = None and later when I use it: val actualFile = file.getOrElse(new File("nonexisting")) if(actualFile.getName.equals("nonexisting")) { // instead of null checking } else { // value of file was good } Doing stuff like this doesn't feel all that "right" to me. I also noticed that .get has become deprecated. Is this sort of stuff what you guys are doing with Options too, or am I going the wrong way?

    Read the article

  • A bounce-rate attack to manipulate SEO?

    - by Denis Volovik
    This is a question for experienced people who might help us shed some light on the issue. We noticed some very strange behavior on our site in Google Analytics. Some dude from Finland, namely from the city of Kouvola, is hitting one of our pages - only one page on our site - about a hundred times per day, all with an average bounce rate of 90%+... This is causing our overall bounce rate to go up by 1 to 3% per day... which is very disturbing, since we're trying to do our best to keep it as low as possible. And obviously having it jump from ~24% to 27% just because of that crazy dude is not making us happy at all... We tried implementing a geo-targeted script in order to catch this particular visitor and deliver him a juicy message, and it seemed to help in the beginning - it stopped for a day or two - but now he's back... The geo-targeted script was also logging all IP addresses for page requests originating from Finland in order to find out more details (and to block them at the server level later), but the thing is, they were all mainly cable or DSL connections with varying, non-repeating IPs... We are all wondering: what is he up to, really? I think this page should be kept updated with ideas on how to combat this, and perhaps someone could also shed light on what it might be. What is the reason for doing this "bounce-rate attack", as I call it? There was a similar question asked on Stack Overflow earlier, with no meaningful answer - here - How to stop bounce rate manipulation.

    Read the article

  • Best ways to collect location-based user input

    - by user359650
    I'm working on a website where users will be able to register and provide information about their location. In order to prevent users from entering incorrect data, we don't want them to provide free-text information but instead to choose from predefined values as much as possible. We believe there are two ways of providing those values: use an API to an external service provider, or create your own local database.
    APIs - some resources:
    - https://developers.facebook.com/docs/reference/ads-api/get-autocomplete-data/
    - http://developer.yahoo.com/geo/geoplanet/
    Pros: accuracy and completeness of data; no maintenance related to updating the data, as this is taken care of by the API provider; easier/faster to get started (no need to create a local database, just implement the API).
    Cons: degraded performance when there are availability issues with the external API; outages due to changes to the external API (until your code is updated to reflect those changes); lock-in with the external provider.
    Local database - some resources:
    - http://developer.yahoo.com/geo/geoplanet/data/
    - http://www.maxmind.com/app/geolitecity
    - http://download.geonames.org/export/dump/
    Pros: no external dependency, so improved stability and performance.
    Cons: more work to get started (you need to create the database and the code to interact with it); risk of inaccurate/incomplete data, either initially or over time; more maintenance work to keep the database up to date.
    Assuming the depth of information requested from users is as follows:
    - country: interested in the value; also used to narrow down the list of regions.
    - region (state in the US, county in the UK...): not interested in the value itself, only used to narrow down the list of cities.
    - city: interested in the value (which can be used to work out the related region should we need regional statistics).
    - address: interested in the value, although OPTIONAL.
    Which option (API or local database) would you choose? What tips would you give for the implementation? What other resources can you share?
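
    If the local-database route is chosen, a minimal sketch of the country-to-city narrowing step could look like the following. It assumes the GeoNames dump linked above, whose rows are tab-separated with the place name in the second column and the ISO country code in the ninth; treat the file name and column positions as assumptions to verify against the dump's readme:

        import java.io.BufferedReader;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.ArrayList;
        import java.util.List;

        public class CityLookup {
            // Returns city names for one country, narrowing the predefined list the user picks from.
            static List<String> citiesForCountry(String dumpFile, String isoCountry) throws Exception {
                List<String> cities = new ArrayList<>();
                try (BufferedReader in = Files.newBufferedReader(Paths.get(dumpFile), StandardCharsets.UTF_8)) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        String[] f = line.split("\t");
                        // f[1] = place name, f[8] = ISO country code (assumed GeoNames layout)
                        if (f.length > 8 && f[8].equals(isoCountry)) {
                            cities.add(f[1]);
                        }
                    }
                }
                return cities;
            }

            public static void main(String[] args) throws Exception {
                // "cities1000.txt" is one of the GeoNames extracts; the path here is hypothetical.
                System.out.println(citiesForCountry("cities1000.txt", "GB").size());
            }
        }

    In production you would load the dump into a database table indexed by country code rather than rescanning the file on every request, but the narrowing logic stays the same.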

    Read the article

  • O'Reilly deals to April 5, 2012 14:00 PT on books on "where"

    - by TATWORTH
    At http://shop.oreilly.com/category/deals/where-conference.do, O'Reilly are offering a series of books on geo-location at 50% off until April 5, 2012 14:00 PT. HTML5 Geolocation Truly revolutionary: now you can write geolocation applications directly in the browser, rather than develop native apps for particular devices. This concise book demonstrates the W3C Geolocation API in action, with code and examples to help you build HTML5 apps using the "write once, deploy everywhere" model. Along the way, you get a crash course in geolocation, browser support, and ways to integrate the API with common geo tools like Google Maps. HTML5 Cookbook With scores of practical recipes you can use in your projects right away, this cookbook helps you gain hands-on experience with HTML5’s versatile collection of elements. You get clear solutions for handling issues with everything from markup semantics, web forms, and audio and video elements to related technologies such as geolocation and rich JavaScript APIs. Each informative recipe includes sample code and a detailed discussion on why and how the solution works. Perfect for intermediate to advanced web and mobile web developers, this handy book lets you choose the HTML5 features that work for you—and helps you experiment with the rest. HTML5 Applications HTML5 is not just a replacement for plugins. It also makes the Web a first-class development environment by giving JavaScript programmers a solid foundation for building industrial-strength applications. This practical guide takes you beyond simple site creation and shows you how to build self-contained HTML5 applications that can run on mobile devices and compete with desktop apps. You’ll learn powerful JavaScript tools for exploiting HTML5 elements, and discover new methods for working with data, such as offline storage and multi-threaded processing. Complete with code samples, this book is ideal for experienced JavaScript and mobile developers alike. There are also other books being offered at a discount at http://shop.oreilly.com/category/deals/where-conference.do

    Read the article

  • 3 Trends for SMBs around Social, Mobile, and Sensor

    - by Socially_Aware_Enterprise
    While I am often talking to big companies or discussing enterprise solutions, there are times when individuals ask me about small- and medium-sized business trends. Interestingly, the enterprise Social, Mobile, and Sensor initiatives I regularly discuss are in fact relevant even to the mom-and-pop storefront. The ecosystem of new service players in the Social-Mobile-Sensor space generally emerges by developing partnerships with enterprises, bringing economies of scale to their services for the larger market. And of course Oracle has an entire division dedicated to delivering products and support to help emerging companies compete without the need to open an industrial-strength credit line. So here are some trends that we are helping large enterprises deploy today, but that small and medium businesses should be able to take advantage of by the end of this year and into 2015. 1) The typical small business is generally "Localized", but the ability to be "Hyper-Localized" will come as location-based services become ubiquitous. Many small businesses have one or several storefronts, typically within a single regional economic footprint. While the internet provides global reach, it will be the businesses that invest in social, mobile and local that will win in the end. Of course I am a huge SoMoLo evangelist. SMBs' content and targeting with platforms for Geo-Fencing, Geo-Conquesting and Path-Matching to HHI are all going to be accessible to them, if not through mobile apps, then via mobile messaging in the social networks that offer it. Expect to be able to target Facebook messaging not by city, but by store or mall… which makes being able to be "Hyper-Local" even more important. And with more new proximity services coming online than ever before, SMBs will operate and service customers with pinpoint accuracy, right down to where they stand in an aisle. Geo-Conquesting will be huge for small players, letting them place ads when customers pass through competitors' regions. Car dealers are doing this now. And of course iBeacons are now very cheap and getting easier to put in retail stores. The ability for sales to happen anywhere in the store via a mobile phone or tablet is huge, as it will give the small shop the flexibility not to have to "Guard the Register" as more or most transactions become digital. Thus, M-Commerce and T-Commerce will change the job of the cashier dramatically. 2) Intra-Brand Advocacy: the idea now is that rather than just depending on your trusty social media manager and his team, you are going to push more and more individuals with expertise inside the organization to help manage, reach out on, and utilize social channels to handle the incoming questions and answers customers need. While for years CRM was the tool of the enterprise, today CRMs enable this "Salesforce et al" capability to trickle throughout the company. This puts greater pressure on organizing roles, but also flattens out the organization. Internal collaboration around topics and customer needs is going to be the key for SMBs to finally get serious about customer experiences. Their customers are online and in social networks. This includes not just B2C SMBs but B2B companies as well. Don't believe me?
    To find the players, just use the hashtag #SocialSelling and you will see… 3) The visual networks will begin to move from content aggregators to content collaboration platforms, which means Pinterest, Instagram, Vine, and others will begin to add more of the features brands want - marketing platforms first, rather than the one-off brand partnerships they do today - and this will open ways for SMBs to engage with clear brand messaging and metrics, eventually providing more "Collaboration" between brand and consumer. Don't think for a minute Facebook bought Oculus Rift so you could see your timeline in 3-D. The social networks I advise customers to invest in are ones that are intrinsically audio and visual. Players from SoundCloud to Pinterest are deploying ways for brands to harness their interactive visual- or audio-based social networks to sell ad units, a.k.a. brand messaging. While the social media revolution was going on, the emphasis was on the social; today it is more and more about the media in social, which enterprises, and soon small and medium businesses, will be connected to.

    Read the article

  • High Availability for IaaS, PaaS and SaaS in the Cloud

    - by BuckWoody
    Outages, natural disasters and unforeseen events have proved that even in a distributed architecture, you need to plan for High Availability (HA). In this entry I'll explain a few considerations for HA within Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). In a separate post I'll talk more about Disaster Recovery (DR), since each paradigm has a different way to handle that.
    Planning for HA in IaaS: IaaS involves Virtual Machines, so in effect an HA strategy here takes on many of the same characteristics as it would on-premises. The primary difference is that the vendor controls the hardware, so you need to verify what they do for things like local redundancy and so on from the hardware perspective. As far as what you can control and plan for, the primary factors fall into three areas: multiple instances, geographical dispersion and task-switching. In almost every cloud vendor I've studied, to ensure your application will be protected by any level of HA, you need to have at least two of the Instances (VMs) running. This makes sense, but you might assume that the vendor just takes care of that for you - they don't. If a single VM goes down (for whatever reason) then access to it is lost. Depending on multiple factors, you might be able to recover the data, but you should assume that you can't. You should keep a sync to another location (perhaps the vendor's storage system in another geographic datacenter, or a local location) to ensure you can continue to serve your clients. You'll also need to host the same VMs in another geographical location. Everything from a vendor outage to a network path problem could prevent your users from reaching the system, so you need multiple locations to handle this. This means that you'll have to figure out how to manage state between the geos. If the system goes down in the middle of a transaction, you need to figure out what part of the process the system was in, and then re-create or transfer that state to the second set of systems. If you didn't write the software yourself, this is non-trivial. You'll also need a manual or automatic process to detect the failure and re-route the traffic to your secondary location. You could flip a DNS entry (if your application can tolerate that) or invoke another process to alias the first system to the second, such as load-balancing and so on. There are many options, but all of them involve coding the state into the application layer. If you've simply moved a stateful application to VMs, you may not be able to easily implement an HA solution.
    Planning for HA in PaaS: Implementing HA in PaaS is a bit simpler, since it's built on the concept of stateless application deployment. Once again, you need at least two copies of each element in the solution (web roles, worker roles, etc.) to remain available in a single datacenter. Also, you need to deploy the application again in a separate geo, but the advantage here is that you could work out a "shared storage" model such that state is auto-balanced across the world. In fact, you don't have to maintain a "DR" site; the alternate location can be live and serving clients, and only take on extra load if the other site is not available. In Windows Azure, you can use the Traffic Manager service to route the requests as a type of auto-balancer. Even with these benefits, I recommend a second backup of storage in another geographic location. Storage is inexpensive, and that second copy can be used for not only HA but DR.
    Planning for HA in SaaS: In Software-as-a-Service (such as Office 365, or Hadoop in Windows Azure) you have far less control over the HA solution, although you still maintain the responsibility to ensure you have it. Since each SaaS is different, check with the vendor on the solution for HA - and make sure you understand what they do and what you are responsible for. They may have no HA for that solution, or pin it to a particular geo, or perhaps they have massive HA built in with automatic load balancing (which is often the case).
    All of these options (with the exception of SaaS) involve higher costs for the design. Do not sacrifice reliability for cost - that will always cost you more in the end. Build in the redundancy and HA at the very outset of the project - if you try to tack it on later in the process the business will push back and potentially not implement HA.
    References: http://www.bing.com/search?q=windows+azure+High+Availability (each type of implementation is different, so I'm routing you to a search on the topic - look for the "Patterns and Practices" results for the area in Azure you're interested in)
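
    As a rough illustration of the "detect the failure and re-route the traffic" step from the IaaS discussion above, here is a minimal health-check sketch that falls back to a secondary endpoint when the primary stops answering. The URLs, timeout, and failure threshold are hypothetical, and a real deployment would normally lean on a managed service such as Traffic Manager rather than hand-rolled probing:

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class FailoverProbe {
            static boolean healthy(String endpoint) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(endpoint).openConnection();
                    conn.setConnectTimeout(3000); // hypothetical 3-second timeout
                    conn.setReadTimeout(3000);
                    conn.setRequestMethod("GET");
                    int code = conn.getResponseCode();
                    conn.disconnect();
                    return code >= 200 && code < 400; // treat 2xx/3xx as healthy
                } catch (Exception e) {
                    return false; // a connection failure counts as unhealthy
                }
            }

            public static void main(String[] args) throws InterruptedException {
                String primary = "https://primary.example.com/health";     // hypothetical
                String secondary = "https://secondary.example.com/health"; // hypothetical
                int consecutiveFailures = 0;
                while (true) {
                    if (healthy(primary)) {
                        consecutiveFailures = 0;
                    } else if (++consecutiveFailures >= 3) {
                        // This is where you would flip a DNS entry or re-point a
                        // load-balancer alias at the secondary location.
                        System.out.println("Primary down, routing traffic to " + secondary);
                    }
                    Thread.sleep(30_000); // probe every 30 seconds
                }
            }
        }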

    Read the article

  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries; I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache. The main problem is that I can't query the datastore by location because GAE won't allow multiple numerical comparison statements (<, <=, >=, >) in one query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no-go. Currently, my algorithm looks like this:
    1.) Query by date and select.
    2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance.
    3.) Loop through the results and remove all with lat/lng outside the max/min.
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius. Remove results outside the supplied distance (is this 2/3/4 combination inefficient?).
    5.) Assemble many-to-many lists and attach to objects (this is where I need to switch to bulk operations).
    6.) Return to client.
    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.
    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update pointers.
    Using this design, the algorithm would now look like:
    1.) Use pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects.
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids.
    6.) Assemble many-to-manys.
    7.) Return to client.
    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
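
    For reference, a minimal sketch of the bounding-box pre-filter plus exact-distance check (steps 2 to 4 above) might look like this; the haversine formula stands in for geopy's distance function, and the GeoObject fields mirror the cached "geo objects" described in the plan (all names are illustrative):

        import java.util.ArrayList;
        import java.util.List;

        public class ProximityFilter {
            static final double EARTH_RADIUS_KM = 6371.0;

            record GeoObject(double lat, double lng, long eventId) {}

            // Great-circle distance between two points, in kilometres (haversine formula).
            static double distanceKm(double lat1, double lng1, double lat2, double lng2) {
                double dLat = Math.toRadians(lat2 - lat1);
                double dLng = Math.toRadians(lng2 - lng1);
                double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                         + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                         * Math.sin(dLng / 2) * Math.sin(dLng / 2);
                return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
            }

            // Cheap rectangular pre-filter followed by the exact radius check, in one pass.
            static List<GeoObject> withinRadius(List<GeoObject> candidates,
                                                double lat, double lng, double radiusKm) {
                double latDelta = Math.toDegrees(radiusKm / EARTH_RADIUS_KM);
                double lngDelta = Math.toDegrees(radiusKm / (EARTH_RADIUS_KM * Math.cos(Math.toRadians(lat))));
                List<GeoObject> result = new ArrayList<>();
                for (GeoObject g : candidates) {
                    if (Math.abs(g.lat() - lat) > latDelta || Math.abs(g.lng() - lng) > lngDelta) {
                        continue; // outside the bounding box, skip the expensive check
                    }
                    if (distanceKm(lat, lng, g.lat(), g.lng()) <= radiusKm) {
                        result.add(g);
                    }
                }
                return result;
            }
        }

    Folding the box test and the exact check into a single pass avoids one of the inefficiencies asked about in steps 3 and 4.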

    Read the article

  • UTF-8 formatting in SPARQL

    - by john
    How can I "say" to SPARQL that ?churchname is in UTF-8 formatting? because response is like:Pražský hrad PREFIX lgv: <http://linkedgeodata.org/vocabulary#> PREFIX abc: <http://dbpedia.org/class/yago/> SELECT ?churchname WHERE { <http://dbpedia.org/resource/Prague> geo:geometry ?gm . ?church a lgv:castle . ?church geo:geometry ?churchgeo . ?church lgv:name ?churchname . FILTER ( bif:st_intersects (?churchgeo,?gm, 10)) } GROUP BY ?churchname ORDER BY ?churchname

    Read the article

  • Unable to start Android emulator > 1.5

    - by Cicatrice
    Hi ! I got this trace when I tried to launch android 1.6 or 2.1. Android 1.5 is working fine. I tried to reinstall each SDK, but there is no way to get it working. I created those AVD with Eclipse plugin. geo@geo-laptop:~> android/android-sdk-linux_86/tools/emulator -avd a16 *** glibc detected *** android/android-sdk-linux_86/tools/emulator: free(): invalid pointer: 0x45454545 *** ======= Backtrace: ========= /lib/libc.so.6(+0x6df7b)[0xb748cf7b] /lib/libc.so.6(cfree+0xd9)[0xb7491ac9] android/android-sdk-linux_86/tools/emulator[0x80db20c] android/android-sdk-linux_86/tools/emulator[0x840eb38] ======= Memory map: ======== 08048000-08246000 r-xp 00000000 08:06 5693701 /home/geo/android/android-sdk-linux_86/tools/emulator 08246000-08249000 rw-p 001fd000 08:06 5693701 /home/geo/android/android-sdk-linux_86/tools/emulator 08249000-08445000 rw-p 00000000 00:00 0 08445000-08447000 rwxp 00000000 00:00 0 08447000-0874c000 rw-p 00000000 00:00 0 [heap] ad8e9000-ada86000 rw-s 00000000 00:04 85229580 /SYSV00000000 (deleted) ada86000-adced000 rw-p 00000000 00:00 0 adced000-add0d000 rw-s 00000000 00:04 84770825 /SYSV0056a4d6 (deleted) add0d000-adde4000 r-xp 00000000 08:05 22591 /usr/lib/libasound.so.2.0.0 adde4000-adde5000 ---p 000d7000 08:05 22591 /usr/lib/libasound.so.2.0.0 adde5000-adde8000 r--p 000d7000 08:05 22591 /usr/lib/libasound.so.2.0.0 adde8000-adde9000 rw-p 000da000 08:05 22591 /usr/lib/libasound.so.2.0.0 adde9000-ade09000 rw-s 00000000 00:05 3268 /dev/snd/pcmC0D0p ade09000-b3e0b000 rw-p 00000000 00:00 0 b3e0b000-b3e0c000 ---p 00000000 00:00 0 b3e0c000-b55cd000 rw-p 00000000 00:00 0 b55cd000-b6dcd000 rwxp 00000000 00:00 0 b6dcd000-b6ea3000 rw-p 00000000 00:00 0 b6ea4000-b7205000 rw-p 00000000 00:00 0 b7205000-b7209000 r-xp 00000000 08:05 22491 /usr/lib/libXfixes.so.3.1.0 b7209000-b720a000 r--p 00003000 08:05 22491 /usr/lib/libXfixes.so.3.1.0 b720a000-b720b000 rw-p 00004000 08:05 22491 /usr/lib/libXfixes.so.3.1.0 b7212000-b7222000 rw-s 00000000 00:05 3269 /dev/snd/pcmC0D0c b7222000-b7226000 r-xp 00000000 08:05 22588 /usr/lib/alsa-lib/libasound_module_rate_speexrate.so b7226000-b7227000 r--p 00003000 08:05 22588 /usr/lib/alsa-lib/libasound_module_rate_speexrate.so b7227000-b7228000 rw-p 00004000 08:05 22588 /usr/lib/alsa-lib/libasound_module_rate_speexrate.so b7228000-b7229000 rw-s 81000000 00:05 3268 /dev/snd/pcmC0D0p b7229000-b722a000 r--s 80000000 00:05 3268 /dev/snd/pcmC0D0p b722a000-b722b000 rw-s 00000000 00:04 84738056 /SYSV0056a4d5 (deleted) b722b000-b7276000 r--p 00000000 08:05 85242 /var/cache/libx11/compose/l4_024_313cb605_00280cc0 b7276000-b72b5000 r--p 00000000 08:05 20724 /usr/lib/locale/en_US.utf8/LC_CTYPE b72b5000-b73d2000 r--p 00000000 08:05 101088 /usr/lib/locale/en_US.utf8/LC_COLLATE b73d2000-b73d9000 r-xp 00000000 08:05 22991 /usr/lib/libXrandr.so.2.2.0 b73d9000-b73da000 r--p 00006000 08:05 22991 /usr/lib/libXrandr.so.2.2.0 b73da000-b73db000 rw-p 00007000 08:05 22991 /usr/lib/libXrandr.so.2.2.0 b73db000-b73e4000 r-xp 00000000 08:05 4146 /usr/lib/libXrender.so.1.3.0 b73e4000-b73e5000 r--p 00008000 08:05 4146 /usr/lib/libXrender.so.1.3.0 b73e5000-b73e6000 rw-p 00009000 08:05 4146 /usr/lib/libXrender.so.1.3.0 b73e6000-b73f7000 r-xp 00000000 08:05 3705 /usr/lib/libXext.so.6.4.0 b73f7000-b73f8000 r--p 00010000 08:05 3705 /usr/lib/libXext.so.6.4.0 b73f8000-b73f9000 rw-p 00011000 08:05 3705 /usr/lib/libXext.so.6.4.0 b73f9000-b73fa000 rw-p 00000000 00:00 0 b73fa000-b73fc000 r-xp 00000000 08:05 8573 /usr/lib/libXau.so.6.0.0 b73fc000-b73fd000 r--p 00001000 08:05 8573 
/usr/lib/libXau.so.6.0.0 b73fd000-b73fe000 rw-p 00002000 08:05 8573 /usr/lib/libXau.so.6.0.0 b73fe000-b73ff000 rw-p 00000000 00:00 0 b73ff000-b741d000 r-xp 00000000 08:05 3862 /usr/lib/libxcb.so.1.1.0 b741d000-b741e000 r--p 0001d000 08:05 3862 /usr/lib/libxcb.so.1.1.0 b741e000-b741f000 rw-p 0001e000 08:05 3862 /usr/lib/libxcb.so.1.1.0 b741f000-b7583000 r-xp 00000000 08:05 39690 /lib/libc-2.11.1.so b7583000-b7584000 ---p 00164000 08:05 39690 /lib/libc-2.11.1.so b7584000-b7586000 r--p 00164000 08:05 39690 /lib/libc-2.11.1.so b7586000-b7587000 rw-p 00166000 08:05 39690 /lib/libc-2.11.1.so b7587000-b758a000 rw-p 00000000 00:00 0 b758a000-b75a6000 r-xp 00000000 08:05 11519 /lib/libgcc_s.so.1 b75a6000-b75a7000 r--p 0001b000 08:05 11519 /lib/libgcc_s.so.1 b75a7000-b75a8000 rw-p 0001c000 08:05 11519 /lib/libgcc_s.so.1 b75a8000-b768b000 r-xp 00000000 08:05 85419 /usr/lib/libstdc++.so.6.0.14 b768b000-b768c000 ---p 000e3000 08:05 85419 /usr/lib/libstdc++.so.6.0.14 b768c000-b7690000 r--p 000e3000 08:05 85419 /usr/lib/libstdc++.so.6.0.14 b7690000-b7691000 rw-p 000e7000 08:05 85419 /usr/lib/libstdc++.so.6.0.14 b7691000-b7698000 rw-p 00000000 00:00 0 b7698000-b76c0000 r-xp 00000000 08:05 39698 /lib/libm-2.11.1.so b76c0000-b76c1000 r--p 00027000 08:05 39698 /lib/libm-2.11.1.so b76c1000-b76c2000 rw-p 00028000 08:05 39698 /lib/libm-2.11.1.so b76c2000-b76d9000 r-xp 00000000 08:05 39716 /lib/libpthread-2.11.1.so b76d9000-b76da000 r--p 00016000 08:05 39716 /lib/libpthread-2.11.1.so b76da000-b76db000 rw-p 00017000 08:05 39716 /lib/libpthread-2.11.1.so b76db000-b76de000 rw-p 00000000 00:00 0 b76de000-b76e1000 r-xp 00000000 08:05 39696 /lib/libdl-2.11.1.so b76e1000-b76e2000 r--p 00002000 08:05 39696 /lib/libdl-2.11.1.so b76e2000-b76e3000 rw-p 00003000 08:05 39696 /lib/libdl-2.11.1.so b76e3000-b76eb000 r-xp 00000000 08:05 39720 /lib/librt-2.11.1.so b76eb000-b76ec000 r--p 00007000 08:05 39720 /lib/librt-2.11.1.so b76ec000-b76ed000 rw-p 00008000 08:05 39720 /lib/librt-2.11.1.so b76ed000-b76ef000 r-xp 00000000 08:05 39725 /lib/libutil-2.11.1.so b76ef000-b76f0000 r--p 00001000 08:05 39725 /lib/libutil-2.11.1.so b76f0000-b76f1000 rw-p 00002000 08:05 39725 /lib/libutil-2.11.1.so b76f1000-b7828000 r-xp 00000000 08:05 4550 /usr/lib/libX11.so.6.3.0 b7828000-b7829000 r--p 00136000 08:05 4550 /usr/lib/libX11.so.6.3.0 b7829000-b782c000 rw-p 00137000 08:05 4550 /usr/lib/libX11.so.6.3.0 b782c000-b782d000 rw-s 81000000 00:05 3269 /dev/snd/pcmC0D0c b782d000-b782e000 r--s 80000000 00:05 3269 /dev/snd/pcmC0D0c b782e000-b782f000 rw-s 00000000 00:04 82771979 /SYSV0056a4d7 (deleted) b782f000-b7839000 r-xp 00000000 08:05 22208 /usr/lib/libXcursor.so.1.0.2 b7839000-b783a000 r--p 00009000 08:05 22208 /usr/lib/libXcursor.so.1.0.2 b783a000-b783b000 rw-p 0000a000 08:05 22208 /usr/lib/libXcursor.so.1.0.2 b783b000-b783c000 r--p 00000000 08:05 20194 /usr/lib/locale/en_US.utf8/LC_NUMERIC b783c000-b783d000 r--p 00000000 08:05 100190 /usr/lib/locale/en_US.utf8/LC_TIME b783d000-b783e000 r--p 00000000 08:05 100189 /usr/lib/locale/en_US.utf8/LC_MONETARY[1] 24082 abort android/android-sdk-linux_86/tools/emulator -avd a16

    Read the article

  • How to perform gui operation in doInBackground method?

    - by jM2.me
    My application reads a user-selected file which contains addresses and then displays them on a MapView when geocoding is done. To avoid hanging the app, the importing and geocoding are done in an AsyncTask. public class LoadOverlayAsync extends AsyncTask<Uri, Integer, StopsOverlay> { Context context; MapView mapView; Drawable drawable; public LoadOverlayAsync(Context con, MapView mv, Drawable dw) { context = con; mapView = mv; drawable = dw; } protected StopsOverlay doInBackground(Uri... uris) { StringBuilder text = new StringBuilder(); StopsOverlay stopsOverlay = new StopsOverlay(drawable, context); Geocoder geo = new Geocoder(context, Locale.US); try { File file = new File(new URI(uris[0].toString())); BufferedReader br = new BufferedReader(new FileReader(file)); String line; while ((line = br.readLine()) != null) { StopOverlay stopOverlay = null; String[] tempLine = line.split("~"); List<Address> results = geo.getFromLocationName(tempLine[4] + " " + tempLine[5] + " " + tempLine[7] + " " + tempLine[8], 10); if (results.size() > 0) { Toast progressToast = Toast.makeText(context, "More than one yo", 1000); progressToast.show(); } else if (results.size() == 1) { Address addr = results.get(0); GeoPoint mPoint = new GeoPoint((int)(addr.getLatitude() * 1E6), (int)(addr.getLongitude() * 1E6)); stopOverlay = new StopOverlay(mPoint, tempLine); } if (stopOverlay != null) { stopsOverlay.addOverlay(stopOverlay); } //List<Address> results = geo.getFromLocationName(locationName, maxResults) } } catch (URISyntaxException e) { showErrorToast(e.toString()); //e.printStackTrace(); } catch (FileNotFoundException e) { showErrorToast(e.toString()); //e.printStackTrace(); } catch (IOException e) { showErrorToast(e.toString()); //e.printStackTrace(); } return stopsOverlay; } protected void onProgressUpdate(Integer... progress) { Toast progressToast = Toast.makeText(context, "Loaded " + progress.toString(), 1000); progressToast.show(); } protected void onPostExecute(StopsOverlay so) { //mapView.getOverlays().add(so); Toast progressToast = Toast.makeText(context, "Done geocoding", 1000); progressToast.show(); } protected void showErrorToast(String msg) { Toast Newtoast = Toast.makeText(context, msg, 10000); Newtoast.show(); } } But if geocoding fails, I want a dialog popup to let the user edit the address. That would require calling a GUI method while in doInBackground. What would be a good workaround for this?
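
    One common workaround is to keep doInBackground free of UI calls: collect the addresses that fail to geocode, report lightweight progress through publishProgress (onProgressUpdate runs on the UI thread), and show the edit dialog from onPostExecute once the background work is done. A minimal sketch, with illustrative names:

        import java.util.ArrayList;
        import java.util.List;

        import android.app.AlertDialog;
        import android.content.Context;
        import android.os.AsyncTask;

        public class LoadOverlaySketch extends AsyncTask<Void, String, List<String>> {
            private final Context context;

            public LoadOverlaySketch(Context context) {
                this.context = context;
            }

            @Override
            protected List<String> doInBackground(Void... unused) {
                List<String> failedAddresses = new ArrayList<>();
                // ... read the file and geocode each line as in the original code ...
                // When geocoding fails, remember the address instead of touching the UI:
                // failedAddresses.add(addressLine);
                // Lightweight feedback is still possible via publishProgress("Loaded " + count).
                return failedAddresses;
            }

            @Override
            protected void onProgressUpdate(String... messages) {
                // Runs on the UI thread: a safe place for Toasts during the background work.
            }

            @Override
            protected void onPostExecute(List<String> failedAddresses) {
                if (!failedAddresses.isEmpty()) {
                    // UI thread again: it is now safe to show a dialog so the user can fix the addresses.
                    new AlertDialog.Builder(context)
                            .setTitle("Addresses that could not be geocoded")
                            .setMessage(String.join("\n", failedAddresses))
                            .setPositiveButton("Edit", null) // hook up an edit flow here
                            .show();
                }
            }
        }

    If the dialog really must interrupt the import mid-way, the alternative is to publish each failing address via publishProgress and make the background thread wait for the user's response, but batching the fixes at the end is usually simpler.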

    Read the article

  • LINQ to Twitter v2.0.8 Released

    - by Joe Mayo
    Today, I released LINQ to Twitter v2.0.8. Besides normal maintenance, this release includes the Twitter Geo API and the Suggested Users API. LINQ to Twitter is hosted on CodePlex.com: http://linqtotwitter.codeplex.com/ In addition to new functionality, I've made much progress toward LINQ to Twitter documentation; primarily in the Making API Calls area: http://linqtotwitter.codeplex.com/wikipage?title=Making%20API%20Calls&referringTitle=Documentation There's also a discussion forum where you can ask and view questions: http://linqtotwitter.codeplex.com/Thread/List.aspx As always, constructive feedback is welcome. Joe

    Read the article

  • Amazing Video - Watch the speech, humor, vision, belief, spontaneity

    - by Manish Agrawal
    The most amazing part of this video is the sense of humor of this gentleman from Lund University. How learned he must have been at that age, to mix examples, new technologies, vision, geo-politics and so many other things so nicely on the fly and still deliver a clear message with humor. About Steve: what a man - in 1985 he was saying these things... as if he had time-traveled to 2010 and was explaining how computers would influence humankind... wow.

    Read the article

  • Google I/O 2010 - Moving beyond markers: Advanced Maps API customization

    Google I/O 2010 - Moving beyond markers: Advanced Maps API customization Geo 301 Jez Fletcher, David Day With such a large number of Google Maps API sites online, it can be hard to make your site stand out from the crowd. This session covers ways in which you can enhance your Maps API application to truly differentiate it, including customizing your overlays, controls, and map. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 16 0 ratings Time: 36:38 More in Science & Technology

    Read the article

  • Google I/O 2010 - The SketchUp 3D API

    Google I/O 2010 - The SketchUp 3D API: Working with 3D geospatial data Geo 201 Matt Lowrie The world is a three dimensional space. Your geospatial applications should be showing it that way. This session will show how to create 3D data in Building Maker and then use the SketchUp API to customize that data to fit your needs. For all I/O 2010 sessions, please go to code.google.com From: GoogleDevelopers Views: 17 0 ratings Time: 58:28 More in Science & Technology

    Read the article

  • WebCenter Customer Spotlight: Hitachi Data Systems

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Watch this Webcast to see a live demo of how HDS creates multilingual content for their 35+ regional websites.
    Solution Summary: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system. HDS implemented a global web content management system based on Oracle WebCenter Content and integrated the Lingotek translation management system to manage their multilingual content. The implemented solution provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines.
    Company Overview: Hitachi Data Systems (HDS) provides mid-range and high-end storage systems, software and services. It is a wholly owned subsidiary of Hitachi Ltd. and part of the Hitachi Information Systems & Telecommunications Division. The company sells through direct and indirect channels in more than 170 countries and regions. Its customers include 50 percent of the Fortune 100 companies. HDS is based in Santa Clara, California, and has over 5,300 employees in more than 100 countries and regions.
    Business Challenges: HDS has over 35 global websites, and the lack of global web capabilities led to inconsistent messaging, slower time to market and a failure to address local language needs. There was extensive operational overhead due to manual and redundant processes. Translation efforts were superficial, inconsistent and wasteful, and the lack of translation automation tools discouraged localization. HDS's main objectives were to provide a consistent message across all their sites, to maintain a tight governance structure across their messages and related content, to expand the use of the existing content management systems and to implement a centralized translation management system.
    Solution Deployed: HDS implemented a global web content management system based on Oracle WebCenter Content. The solution supports decentralized publishing for their 35+ global sites to address local market needs while ensuring editorial and brand review through embedded review processes. They integrated the Lingotek translation management system into Oracle WebCenter Content to manage their multilingual content.
    Business Results:
    - Provides each Geo with the ability to expand their web offering to meet local market needs, while staying aligned with the Corporate Web Guidelines
    - Enables end-to-end content lifecycle management across multiple languages
    - Leverages translation memory for reuse and consistency
    - Reduces time to market with a central repository of translated content
    Additional Information: HDS Webcast | Oracle WebCenter Content | Lingotek website

    Read the article

  • finding houses within a radius

    - by paul smith
    During an interview I was asked the following: given a real estate application that lists all houses currently on the market (i.e., for sale) within a given distance (say, for example, the user wants to find all houses within 20 miles), how would you design your application (both data structure and algorithm) to build this type of service? Any ideas? How would you implement it? I told him I didn't know because I've never done any geo-related stuff before.
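
    One common answer is to bucket listings into a coarse grid keyed by rounded latitude/longitude (a simplified stand-in for a geohash or a proper spatial index), so a radius query only scans the few neighbouring cells rather than every house. A minimal sketch with illustrative names and a fixed cell size:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        public class HouseIndex {
            record House(long id, double lat, double lng) {}

            private static final double CELL_DEG = 0.5; // roughly 35 miles of latitude per cell
            private final Map<String, List<House>> grid = new HashMap<>();

            private static String cellKey(double lat, double lng) {
                return Math.floor(lat / CELL_DEG) + ":" + Math.floor(lng / CELL_DEG);
            }

            public void add(House h) {
                grid.computeIfAbsent(cellKey(h.lat(), h.lng()), k -> new ArrayList<>()).add(h);
            }

            // Scan only the 3x3 block of cells around the query point, then do the exact check.
            // A real implementation would derive the neighbourhood size from the radius.
            public List<House> within(double lat, double lng, double radiusMiles) {
                List<House> hits = new ArrayList<>();
                for (int dLat = -1; dLat <= 1; dLat++) {
                    for (int dLng = -1; dLng <= 1; dLng++) {
                        String key = cellKey(lat + dLat * CELL_DEG, lng + dLng * CELL_DEG);
                        for (House h : grid.getOrDefault(key, List.of())) {
                            if (distanceMiles(lat, lng, h.lat(), h.lng()) <= radiusMiles) {
                                hits.add(h);
                            }
                        }
                    }
                }
                return hits;
            }

            // Haversine great-circle distance in miles.
            static double distanceMiles(double lat1, double lng1, double lat2, double lng2) {
                double r = 3958.8; // mean Earth radius in miles
                double dLat = Math.toRadians(lat2 - lat1);
                double dLng = Math.toRadians(lng2 - lng1);
                double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                         + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                         * Math.sin(dLng / 2) * Math.sin(dLng / 2);
                return 2 * r * Math.asin(Math.sqrt(a));
            }
        }

    A production system would size the neighbourhood from the radius, or use a real spatial index (an R-tree, geohash, or the database's geo type), but the interview-level idea is the same: cheap bucketing first, exact distance check second.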

    Read the article

  • Maps We Like, and Why We Like Them

    Maps We Like, and Why We Like Them Live from Sydney (now in HD!) Paul and Chris talk about their favorite maps, why we like them, and how we find cool maps. 1:40 Showcase | 5:45 Geo Developer Blog | 8:25 GTA4 Street View map | 11:00 Internet Map | 14:40 How we find cool maps | 20:30 Map of the Dead | 24:50 Old Maps Online | 27:10 Wind Map From: GoogleDevelopers Views: 3 0 ratings Time: 29:18 More in Science & Technology

    Read the article

  • More Than a Map #morethanamap

    More Than a Map #morethanamap Morethanamap.com also features stories from our community of developers who are using the Google Maps API to start businesses, help improve their communities or save the environment. Starting next week we'll showcase these stories weekly on the Geo Developers Blog. And follow us on Google+ to learn more. From: GoogleDevelopers Views: 305 49 ratings Time: 01:48 More in Science & Technology

    Read the article

  • Paid Maps API: Google explains itself; around one site in 300 will be affected after 90 consecutive days of exceeding the quota

    Paid Maps API: Google explains itself. Around one site in 300 will be affected after 90 consecutive days of exceeding the quota. Update of November 25, 2011: Google intends to bill sites for excess usage of the Google Maps API (see above). Its announcement nevertheless left several grey areas that our readers did not fail to point out. A new post on the Geo Developers blog brings a few clarifications, with some key information: sites may occasionally exceed the 25,000 daily map loads without being subject to billing. They must ...

    Read the article
