Search Results

Search found 359 results on 15 pages for 'spatial subdivision'.


  • Hostapd - WLAN as AP

    - by BBK
I'm trying to start hostapd, but without success. I'm using headless Ubuntu 11.10 oneiric, 3.0.0-16-server x86_64. The WLAN driver is rt2800usb and my wireless NIC, a TP-Link TL-WN727N, supports AP mode as shown below:

```
us0# ifconfig wlan0
wlan0  Link encap:Ethernet  HWaddr 00:27:19:be:cd:b6
       UP BROADCAST MULTICAST  MTU:1500  Metric:1
       RX packets:0 errors:0 dropped:0 overruns:0 frame:0
       TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
       collisions:0 txqueuelen:1000
       RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

us0# lsusb
Bus 003 Device 003: ID 148f:3070 Ralink Technology, Corp. RT2870/RT3070 Wireless Adapter

us0# lshw -C network
*-network:3
   description: Wireless interface
   physical id: 4
   bus info: usb@3:2
   logical name: wlan0
   serial: 00:27:19:be:cd:b6
   capabilities: ethernet physical wireless
   configuration: broadcast=yes driver=rt2800usb driverversion=3.0.0-16-server
                  firmware=0.29 link=no multicast=yes wireless=IEEE 802.11bgn

us0# hostapd /etc/hostapd/hostapd.conf
Configuration file: /etc/hostapd/hostapd.conf
Could not read interface wlan0 # The int flags: No such device
nl80211 driver initialization failed.
ELOOP: remaining socket: sock=4 eloop_data=0xd3e4a0 user_data=0xd3ecc0 handler=0x433880
ELOOP: remaining socket: sock=6 eloop_data=0xd411f0 user_data=(nil) handler=0x43cc10

us0# cat /etc/hostapd/hostapd.conf
ssid=Home
interface=wlan0 # The interface name of the card
#driver=rt2800usb
driver=nl80211
macaddr_acl=0
ieee80211n=1
channel=1
hw_mode=g
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=88888888
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP

us0# iw list
Wiphy phy0
  Band 1:
    Capabilities: 0x172
      HT20/HT40  Static SM Power Save  RX Greenfield  RX HT20 SGI  RX HT40 SGI
      RX STBC 1-stream  Max AMSDU length: 7935 bytes  No DSSS/CCK HT40
    Maximum RX AMPDU length 65535 bytes (exponent: 0x003)
    Minimum RX AMPDU time spacing: 2 usec (0x04)
    HT RX MCS rate indexes supported: 0-7, 32
    TX unequal modulation not supported
    HT TX Max spatial streams: 1
    HT TX MCS rate indexes supported may differ
    Frequencies:
      * 2412 MHz [1] (20.0 dBm)   * 2417 MHz [2] (20.0 dBm)   * 2422 MHz [3] (20.0 dBm)
      * 2427 MHz [4] (20.0 dBm)   * 2432 MHz [5] (20.0 dBm)   * 2437 MHz [6] (20.0 dBm)
      * 2442 MHz [7] (20.0 dBm)   * 2447 MHz [8] (20.0 dBm)   * 2452 MHz [9] (20.0 dBm)
      * 2457 MHz [10] (20.0 dBm)  * 2462 MHz [11] (20.0 dBm)
      * 2467 MHz [12] (20.0 dBm) (passive scanning, no IBSS)
      * 2472 MHz [13] (20.0 dBm) (passive scanning, no IBSS)
      * 2484 MHz [14] (20.0 dBm) (passive scanning, no IBSS)
    Bitrates (non-HT):
      * 1.0 Mbps   * 2.0 Mbps (short preamble supported)
      * 5.5 Mbps (short preamble supported)   * 11.0 Mbps (short preamble supported)
      * 6.0 Mbps   * 9.0 Mbps   * 12.0 Mbps   * 18.0 Mbps
      * 24.0 Mbps  * 36.0 Mbps  * 48.0 Mbps   * 54.0 Mbps
    max # scan SSIDs: 4
    Supported interface modes:
      * IBSS  * managed  * AP  * AP/VLAN  * WDS  * monitor  * mesh point
    Supported commands:
      * new_interface  * set_interface  * new_key  * new_beacon  * new_station
      * new_mpath  * set_mesh_params  * set_bss  * authenticate  * associate
      * deauthenticate  * disassociate  * join_ibss  * Unknown command (68)
      * Unknown command (55)  * Unknown command (57)  * Unknown command (59)
      * Unknown command (67)  * set_wiphy_netns  * Unknown command (65)
      * Unknown command (66)  * connect  * disconnect
```

The question is: why is hostapd not starting?
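Editor's note: the error line itself offers a clue. hostapd echoes the interface name it tried to use as "wlan0 # The int", which suggests the trailing comment on the `interface=` line is being read as part of the interface name (hostapd comments normally live on their own lines). This is a guess from the output shown, not a confirmed fix, but a config with comments kept on separate lines, using the same values as above, would look like:

```
# /etc/hostapd/hostapd.conf -- comments only on their own lines
interface=wlan0
driver=nl80211
ssid=Home
hw_mode=g
channel=1
ieee80211n=1
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_passphrase=88888888
wpa_key_mgmt=WPA-PSK
wpa_pairwise=TKIP
rsn_pairwise=CCMP
macaddr_acl=0
```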

    Read the article

  • Why would Copying a Large Image to the Clipboard Freeze a Computer?

    - by Akemi Iwaya
Sometimes, something really odd happens when using our computers that makes no sense at all…such as copying a simple image to the clipboard and the computer freezing up because of it. An image is an image, right? Today's SuperUser post has the answer to a puzzled reader's dilemma.

Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. Original image courtesy of Wikimedia.

The Question

SuperUser reader Joban Dhillon wants to know why copying an image to the clipboard on his computer freezes it up:

I was messing around with some height map images and found this one: (http://upload.wikimedia.org/wikipedia/commons/1/15/Srtm_ramp2.world.21600×10800.jpg) The image is 21,600*10,800 pixels in size. When I right click and select "Copy Image" in my browser (I am using Google Chrome), it slows down my computer until it freezes. After that I must restart. I am curious about why this happens. I presume it is the size of the image, although it is only about 6 MB when saved to my computer. I am also using Windows 8.1.

Why would a simple image freeze Joban's computer up after copying it to the clipboard?

The Answer

SuperUser contributor Mokubai has the answer for us:

"Copy Image" is copying the raw image data, rather than the image file itself, to your clipboard. The raw image data will be 21,600 x 10,800 x 3 (24 bit image) = 699,840,000 bytes of data. That is approximately 700 MB of data your browser is trying to copy to the clipboard. JPEG compresses the raw data using a lossy algorithm and can get pretty good compression. Hence the compressed file is only 6 MB. The reason it makes your computer slow is that it is probably filling your memory up with at least the 700 MB of image data that your browser is using to show you the image, another 700 MB (along with whatever overhead the clipboard incurs) to store it on the clipboard, and a not insignificant amount of processing power to convert the image into a format that can be stored on the clipboard. Chances are that if you have less than 4 GB of physical RAM, then those copies of the image data are forcing your computer to page memory out to the swap file in an attempt to fulfil both memory demands at the same time. This will cause programs and disk access to be sluggish as they use the disk and try to use the data that may have just been paged out. In short: do not use the clipboard for huge images unless you have a lot of memory and a bit of time to spare.

Like pretty graphs? This is what happens when I load that image in Google Chrome, then copy it to the clipboard on my machine with 12 GB of RAM: it starts off at the lower point using 2.8 GB of RAM, loading the image punches it up to 3.6 GB (approximately the 700 MB), then copying it to the clipboard spikes way up there at 6.3 GB of RAM before settling back down at the 4.5-ish you would expect to see for a program and two copies of a rather large image. That is a whopping 3.7 GB of image data being worked on at the peak, which is probably the initial image, a reserved quantity for the clipboard, and perhaps a couple of conversion buffers. That is enough to bring any machine with less than 8 GB of RAM to its knees. Strangely, doing the same thing in Firefox just copies the image file rather than the image data (without the scary memory surge).

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users?
Check out the full discussion thread here.
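Editor's note: the arithmetic above is easy to reproduce. A small C# sketch that computes the uncompressed size of the bitmap at 24 bits per pixel (the figure used in the answer) and at 32 bits per pixel (an assumption about a common in-memory layout, not something the answer states):

```csharp
using System;

class RawImageSize
{
    static void Main()
    {
        long width = 21600, height = 10800;

        // 24 bits per pixel (3 bytes), as used in the answer above
        long bytes24 = width * height * 3;

        // 32 bits per pixel (4 bytes), a common in-memory layout (assumption)
        long bytes32 = width * height * 4;

        // Decimal megabytes, to match the "approximately 700 MB" wording above
        Console.WriteLine($"24 bpp: {bytes24:N0} bytes (~{bytes24 / 1_000_000.0:F0} MB)");
        Console.WriteLine($"32 bpp: {bytes32:N0} bytes (~{bytes32 / 1_000_000.0:F0} MB)");
    }
}
```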

    Read the article

  • Why Is Hibernation Still Used?

    - by Jason Fitzpatrick
With the increased prevalence of fast solid-state hard drives, why do we still have system hibernation?

Today's Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

The Question

SuperUser reader Moses wants to know why he should use hibernate on a desktop machine:

I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me [on desktop computers] when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it?

Quite a few people use hibernate, so what is Moses missing in the big picture?

The Answer

SuperUser contributor Vignesh4304 writes:

Normally, hibernate mode saves your computer's memory (this includes, for example, open documents and running applications) to your hard disk and shuts down the computer; it uses zero power. Once the computer is powered back on, it will resume everything where you left off. You can use this mode if you won't be using the laptop/desktop for an extended period of time and you don't want to close your documents. Simple usage and purpose: saving electric power and resuming your documents. In simple terms, it is like sleeping: you sleep, but your memories are still present. Why it's used: let me describe one sample scenario. Imagine your battery is low on power in your laptop, and you are working on important projects on your machine. You can switch to hibernate mode; it will result in your documents being saved, and when you power on, the actual state of the application gets restored. Its main usage is like an emergency shutdown with an auto-resume of your documents.

MagicAndre1981 highlights the reason we use hibernate every day:

Because it saves the status of all running programs. I leave all my programs open and can resume working the next day very easily. Doing a real boot would require starting all programs again, loading all the same files into those programs, getting to the same place that I was at before, and putting all my windows in exactly the same place. Hibernating saves a lot of work pulling these things back up again.

It's not unusual to find computers around the office here that have been hibernated day in and day out for months without an actual full system shutdown and restart. It's enormously convenient to freeze your work space at the exact moment you stopped working and to turn right around and resume there the next morning.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
(Guest Editorial for the IT Pro/SysAdmin Newsletter)

Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we've made great strides with indexing text, in processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples.

NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn't impose a schema, and relies on the application to enforce the data structure. This is another take on the old 'EAV' problem (where you don't know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC driver for Apache Hive and an add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server.

As one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter's assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). On the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.
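Editor's note: to make the "no fixed schema / EAV" point concrete, here is a minimal sketch using the MongoDB C# driver (MongoDB.Driver). The database, collection and field names are invented for illustration. Two documents with different attribute sets can sit side by side in the same collection, and it is the application, not the database, that decides what structure to enforce:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

class SchemalessExample
{
    static void Main()
    {
        var client = new MongoClient("mongodb://localhost:27017");
        var events = client.GetDatabase("analytics").GetCollection<BsonDocument>("events");

        // Two events with different attribute sets; no schema change required.
        events.InsertOne(new BsonDocument
        {
            { "type", "page_view" },
            { "url", "/products/42" }
        });

        events.InsertOne(new BsonDocument
        {
            { "type", "purchase" },
            { "sku", "SQL-BOOK-01" },
            { "amount", 29.99 },
            { "coupon", "SPRING" }   // an attribute the first document never had
        });
    }
}
```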

    Read the article

  • Java @Contended annotation to help reduce false sharing

    - by Dave
See this posting by Aleksey Shipilev for details -- @Contended is something we've wanted for a long time. The JVM provides automatic layout and placement of fields. Usually it'll (a) sort fields by descending size to improve footprint, and (b) pack reference fields so the garbage collector can process a contiguous run of reference fields when tracing. @Contended gives the program a way to provide more explicit guidance with respect to concurrency and false sharing. Using this facility we can sequester hot, frequently written shared fields away from other mostly read-only or cold fields. The simple rule is that read-sharing is cheap, and write-sharing is very expensive. We can also pack fields together that tend to be written together by the same thread at about the same time. More generally, we're trying to influence relative field placement to minimize coherency misses. Fields that are accessed closely together in time should be placed proximally in space to promote cache locality. That is, temporal locality should condition spatial locality: fields accessed together in time should be nearby in space. That having been said, we have to be careful to avoid false sharing and excessive invalidation from coherence traffic. As such, we try to cluster or otherwise sequester fields that tend to be written at approximately the same time by the same thread onto the same cache line. Note that there's a tension at play: if we try too hard to minimize single-threaded capacity misses then we can end up with excessive coherency misses running in a parallel environment. There's no single optimal layout for both single-threaded and multithreaded environments. And the ideal layout problem itself is NP-hard. Ideally, a JVM would employ hardware monitoring facilities to detect sharing behavior and change the layout on the fly. That's a bit difficult as we don't yet have the right plumbing to provide efficient and expedient information to the JVM. Hint: we need to disintermediate the OS and hypervisor. Another challenge is that raw field offsets are used in the unsafe facility, so we'd need to address that issue, possibly with an extra level of indirection. Finally, I'd like to be able to pack final fields together as well, as those are known to be read-only.
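Editor's note: the annotation itself is Java-only, but the underlying idea (keeping a hot, frequently written field off the cache line used by unrelated data) can be sketched in C#, the language used elsewhere on this page, with explicit padding. This illustrates the concept rather than the @Contended mechanism; the 128-byte slot size is an assumption chosen to cover a cache line or two, roughly in line with the JVM's default contended padding:

```csharp
using System.Runtime.InteropServices;

// Each counter gets its own 128-byte slot so counters written by different
// threads are very unlikely to share a cache line (the manual equivalent of
// what @Contended asks the JVM to arrange automatically).
[StructLayout(LayoutKind.Explicit, Size = 128)]
public struct PaddedCounter
{
    [FieldOffset(0)]
    public long Value;
    // bytes 8..127 are padding supplied by Size = 128
}

public class PerThreadCounters
{
    // One padded slot per worker thread; concurrent increments from different
    // threads land in different slots, avoiding write-sharing of a cache line.
    private readonly PaddedCounter[] _counters;

    public PerThreadCounters(int threads) => _counters = new PaddedCounter[threads];

    public void Increment(int thread) => _counters[thread].Value++;

    public long Total()
    {
        long sum = 0;
        for (int i = 0; i < _counters.Length; i++) sum += _counters[i].Value;
        return sum;
    }
}
```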

    Read the article

  • MySQL Workbench 6.2.1 BETA has been released

    - by user12602715
The MySQL Workbench team is announcing availability of the first beta release of its upcoming major product update, MySQL Workbench 6.2.

MySQL Workbench 6.2 focuses on support for innovations released in MySQL 5.6 and MySQL 5.7 DMR (Development Release) as well as MySQL Fabric 1.5, with features such as:

- A new spatial data viewer, allowing graphical views of result sets containing GEOMETRY data and taking advantage of the new GIS capabilities in MySQL 5.7.
- Support for new MySQL 5.7.4 SQL syntax and configuration options.
- Metadata Locks View shows the locks connections are blocked or waiting on.
- MySQL Fabric cluster connectivity - browsing, view status, and connect to any MySQL instance in a Fabric Cluster.
- MS Access migration Wizard - easily move to MySQL Databases.

Other significant usability improvements were made, aiming to raise productivity for advanced and new users:

- Direct shortcut buttons to commonly used features in the schema tree.
- Improved results handling. Columns have better auto-sizing and their widths are saved. Fonts can also be customized. Results "pinned" to persist viewing data.
- A convenient Run SQL Script command to directly execute SQL scripts, without loading them first.
- Database Modeling has been updated to allow changes to the formatting of note objects, and attached SQL scripts can now be included in forward engineering and synchronization scripts.
- Integrated Visual Explain within the result set panel.
- Visual Explain drill down for large to very large explain plans.
- Shared SQL snippets in the SQL Editor, allowing multiple users to share SQL code by storing it within a MySQL instance.
- And much more.

The list of provided binaries was updated and MySQL Workbench binaries are now available for:

- Windows 7 or newer
- Mac OS X Lion or newer
- Ubuntu 12.04 LTS and Ubuntu 14.04
- Fedora 20
- Oracle Linux 6.5
- Oracle Linux 7
- Sources for building in other Linux distributions

For the full list of changes in this revision, visit http://dev.mysql.com/doc/relnotes/workbench/en/changes-6-2.html

For discussion, join the MySQL Workbench Forums: http://forums.mysql.com/index.php?151

Download MySQL Workbench 6.2.1 now, for Windows, Mac OS X 10.7+, Oracle Linux 6 and 7, Fedora 20, Ubuntu 12.04 and Ubuntu 14.04 or sources, from: http://dev.mysql.com/downloads/tools/workbench/

On behalf of the MySQL Workbench and the MySQL/ORACLE RE Team.

    Read the article

  • Are there software options (preferably .NET) for doing distance and speed analysis of footballers moving on video?

    - by Anonymous Type
Editing question for clarity: thanks for the feedback so far, very insightful. I'm not sure how far along this part of the software community is, and what, if any, libraries exist for me to leverage. Here's what I'm trying to do.

Problem: Take an existing video of a game of rugby league. The rugby league field is 100 metres long, 70 metres wide, and has white line markings every 10 metres running along the width of the field, as well as along the sidelines. Each side has 13 players on the field. Players on each team have identical jerseys that normally contrast strongly against the background colours (green/brown field colour), the referee's colour (usually yellow) and the designated water runner (orange). All players have a unique number in thick white lettering on their backs for identification. Video is taken with a high definition camera. Currently only one camera is used (2D) and existing video does not contain a foreground object of fixed spatial dimensions (as suggested in one answer for comparison measurements; however, I could add this to future filming sessions if it is worthwhile). The players do not run in a straight line 50% of the time but will go sideways or on a diagonal to play the ball. The distance measured always starts from the spot of the previous "tackle", which ends where the player stops forward movement. It is not always possible to determine the player's number from the video (facing the other direction, sunlight, others standing in the way of the camera), but this isn't important, as the software could allow for manual inputting of unknown "runs" at a later point after analysis.

Goal: Determine the distance between two points (i.e. where the player started his "run" and where he finished it). I'm guessing that this would be quite doable if I manually marked the start and end point in the video. But how would I use landmarks in the background to determine the distance (assuming the person taking the video has kept it from jerking around)?

Question: Do software packages or libraries exist that are specialised enough to assist with writing analysis software to determine a sportsperson's distance travelled, based on video taken of the performance?
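Editor's note on the "landmarks in the background" part: a common approach (not tied to any particular package) is to treat the pitch as a flat plane and estimate a homography, a 3x3 matrix mapping image pixels to field coordinates in metres, from at least four known reference points such as the intersections of the 10-metre lines with the sidelines. Vision libraries such as OpenCV (or Emgu CV for .NET) handle the estimation step; applying the matrix afterwards is plain arithmetic. A C# sketch of that application step, with placeholder matrix values and pixel coordinates:

```csharp
using System;

static class PitchMapping
{
    // Apply a 3x3 homography H (image pixel -> field metres) to a pixel.
    static (double X, double Y) ToField(double[,] h, double px, double py)
    {
        double w = h[2, 0] * px + h[2, 1] * py + h[2, 2];
        double x = (h[0, 0] * px + h[0, 1] * py + h[0, 2]) / w;
        double y = (h[1, 0] * px + h[1, 1] * py + h[1, 2]) / w;
        return (x, y);
    }

    static void Main()
    {
        // Placeholder matrix -- in practice estimated from >= 4 known
        // pixel/field correspondences (e.g. line intersections) per camera shot.
        double[,] h =
        {
            { 0.10, 0.00,   -5.0 },
            { 0.00, 0.12,   -3.0 },
            { 0.00, 0.0005,  1.0 }
        };

        var start = ToField(h, 420, 310);  // pixel where the run started
        var end   = ToField(h, 760, 355);  // pixel where the tackle ended

        double metres = Math.Sqrt(Math.Pow(end.X - start.X, 2) +
                                  Math.Pow(end.Y - start.Y, 2));
        Console.WriteLine($"Run distance: {metres:F1} m");
    }
}
```

A new homography is needed whenever the camera pans or zooms significantly, which is why a fixed camera (or a per-frame estimate from the visible line markings) makes this much easier.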

    Read the article

  • rails semi-complex STI with ancestry data model planning the routes and controllers

    - by ere
I'm trying to figure out the best way to manage my controller(s) and models for a particular use case. I'm building a review system where a User may build a review of several distinct types with a polymorphic Reviewable:

- Country (has_many reviews & cities)
- Subdivision/State (optional, sometimes it doesn't exist, also reviewable, has_many cities)
- City (has places & reviews)
- Burrow (optional, also reviewable, ex: Brooklyn)
- Neighborhood (optional & reviewable, ex: Williamsburg)
- Place (belongs to city)

I'm also wondering about adding more complexity. I also want to include subdivisions occasionally, i.e. for the US I might add Texas, or for Germany, Bavaria, and have it be reviewable as well, but not every country has regions, and even those that do might never be reviewed. So it's not at all strict. I would like it to be as simple and flexible as possible. It'd kinda be nice if the user could just land on one form and select either a city or a country, and then drill down using data from, say, Foursquare to find a particular place in a city and make a review. I'm really not sure which route I should take. For example, what happens if I have a Country, and a City... and then I decide to add a Burrow? Could I give places tags (i.e. Williamsburg, Brooklyn) that belong_to NY City, and the tags belong to NY? Tags are more flexible and optionally explain what areas they might be in; the tags belong to a city, but also have places and are reviewable? So I'm looking for suggestions from anyone who's done something related. Using Rails 3.2 and Mongoid.

    Read the article

  • Alternative method of viewing a database diagram in SQL Server to see what tables have gone missing?

    - by Triynko
I have a database diagram for my database, but when I open it in SQL Server, I almost immediately get a message saying some permissions changed or tables in the diagram were dropped or renamed, and tables in the diagram vanish before I can even scroll over to see what or where they were. Basically, it's saying, "Hey, you know all that time you spent laying out tables in this diagram... half of them are going to vanish when you view it, and I'm not going to tell you which tables vanished or where they were in the diagram. You're just going to see a bunch of random empty spaces where tables used to be ;)" Ridiculous. So I thought that maybe if I looked in the dbo.sysdiagrams table, I could find some plain-text definition of the diagram to get a clue about the names of the tables that went missing (because their names were probably only changed slightly) or their coordinates in the diagram (because their spatial location would give me a clue as to what they were), so that I could re-add them, but I can't, because it's a binary definition. So, is there some other program I could use to view the existing database diagram that's not going to just drop and forget the missing tables without telling me what they were, or is this information lost and at the mercy of some SSMS-proprietary database diagram format and viewer which refuses to cooperate with me?

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static arps? ACLs? Other simple mechanisms? Two things to investigate here, why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems? The Answer SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (ie. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISPs servers. The server has to tell the modem whether it’s settings (and location on the cable network) are valid, and simply sets it to what the ISP has it set it for (bandwidth, DHCP allocations, etc). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem. Could they simply be using static arps? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes. What keeps me from changing this IP address to, let’s say, 60.61.62.75 and mess with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do. 
Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can't have it. David Schwartz offers some additional insight with a link to a white paper for the really curious:

Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called "reverse path forwarding". See BCP 38.

Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How Can I Safely Destroy Sensitive Data CDs/DVDs?

    - by Jason Fitzpatrick
    You have a pile of DVDs with sensitive information on them and you need to safely and effectively dispose of them so no data recovery is possible. What’s the most safe and efficient way to get the job done? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader HaLaBi wants to know how he can safely destroy CDs and DVDs with personal data on them: I have old CDs/DVDs which have some backups, these backups have some work and personal files. I always had problems when I needed to physically destroy them to make sure no one will reuse them. Breaking them is dangerous, pieces could fly fast and may cause harm. Scratching them badly is what I always do but it takes long time and I managed to read some of the data in the scratched CDs/DVDs. What’s the way to physically destroy a CD/DVD safely? How should he approach the problem? The Answer SuperUser contributor Journeyman Geek offers a practical solution coupled with a slightly mad-scientist solution: The proper way is to get yourself a shredder that also handles cds – look online for cd shredders. This is the right option if you end up doing this routinely. I don’t do this very often – For small scale destruction I favour a pair of tin snips – they have enough force to cut through a cd, yet are blunt enough to cause small cracks along the sheer line. Kitchen shears with one serrated side work well too. You want to damage the data layer along with shearing along the plastic, and these work magnificently. Do it in a bag, cause this generates sparkly bits. There’s also the fun, and probably dangerous way – find yourself an old microwave, and microwave them. I would suggest doing this in a well ventilated area of course, and not using your mother’s good microwave. There’s a lot of videos of this on YouTube – such as this (who’s done this in a kitchen… and using his mom’s microwave). This results in a very much destroyed cd in every respect. If I was an evil hacker mastermind, this is what I’d do. The other options are better for the rest of us. Another contributor, Keltari, notes that the only safe (and DoD approved) way to dispose of data is total destruction: The answer by Journeyman Geek is good enough for almost everything. But oddly, that common phrase “Good enough for government work” does not apply – depending on which part of the government. It is technically possible to recover data from shredded/broken/etc CDs and DVDs. If you have a microscope handy, put the disc in it and you can see the pits. The disc can be reassembled and the data can be reconstructed — minus the data that was physically destroyed. So why not just pulverize the disc into dust? Or burn it to a crisp? While technically, that would completely eliminate the data, it leaves no record of the disc having existed. And in some places, like DoD and other secure facilities, the data needs to be destroyed, but the disc needs to exist. If there is a security audit, the disc can be pulled to show it has been destroyed. So how can a disc exist, yet be destroyed? Well, the most common method is grinding the disc down to destroy the data, yet keep the label surface of the disc intact. Basically, it’s no different than using sandpaper on the writable side, till the data is gone. Have something to add to the explanation? Sound off in the the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here. 
    

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well, it isn't, trust me. Rather than tables you need to consider structures. You have:

1. Heaps
2. Indexes (b-trees)

Some people split indexes in two, clustered and non-clustered. This, I feel, confuses the situation, as people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting; this is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s) or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns, which are stored on the root and intermediary pages, and included columns, which are on the leaf level.

The reason this is important is that this is how the optimiser sees the world; this means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend.

So the challenge for the weekend is to produce the most complex single table query. For those clever bods amongst you who are thinking "great, I will just use lots of XQuery functions" - sorry, these are the rules:

1. You have to use a table from AdventureWorks (2005 or 2008).
2. You can add whatever indexes you like, but you must document these.
3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function.
4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs.
5. No sub queries.

The aim of this is to show how the optimiser can use multiple structures to build the results of a query and to also highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008 and run the query:

```sql
create index IX_Person_Person on Person.Person (lastName, FirstName, NameStyle, PersonType)
create index IX_Person_Person on Person.Person (BusinessentityId, ModifiedDate) with drop_existing

select lastName, ModifiedDate
  from Person.Person
 where LastName = 'Smith'
```

You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures, clustered indexes, non-clustered indexes and heaps, when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.

    Read the article

  • Automating XNA Performance Testing?

    - by Grofit
I was wondering what people's approaches or thoughts were on automating performance testing in XNA. Currently I am looking at only working in 2D, but that poses many areas where performance can be improved with different implementations. An example would be if you had two different implementations of spatial partitioning: one may be faster than another, but without doing some actual performance testing you wouldn't be able to tell which one for sure (unless you saw the code was blatantly slow in certain parts). You could write a unit test which, for a given time frame, kept adding/updating/removing entities for both implementations and see how many operations were completed in each time frame; the higher count would be the faster one (in this given example). Another higher-level example would be if you wanted to see roughly how many entities you can have on the screen without going beneath 60fps. The problem with this is that to automate it you would need to use the hidden form trick or some other approach to kick off a mock game, purely test the parts you care about, and disable everything else.

I know that this isn't a simple affair; even if you can automate the tests, really it is up to a human to interpret whether the results are performant enough. But as part of a build step you could have it run these tests and publish the results somewhere for comparison. This way, if you go from version 1.1 to 1.2 but have changed a few underlying algorithms, you may notice that generally the performance score has gone up, meaning you have improved the overall performance of the application; and then from 1.2 to 1.3 you may notice that you have dropped overall performance a bit. So has anyone automated this sort of thing in their projects, and if so, how do you measure your performance comparisons at a high level and what frameworks do you use to test? Provided you have written your code so it's testable/mockable for the most part, you can just use your tests as a mechanism for getting some performance results...

=== Edit ===

Just for clarity, I am more interested in the best way to make use of automated tests within XNA to track your performance, not play testing or guessing by manually running your game on a machine. This is completely different to seeing if your game is playable on X hardware; it is more about tracking the change in performance as your game engine/framework changes. As mentioned in one of the comments, you could easily test "how many nodes can I insert/remove/update within QuadTreeA within 2 seconds", but you have to physically look at these results every time to see if they have changed, which may be fine and is still better than just relying on playing it to see if you notice any difference between versions. However, if you were to put an Assert in to notify you of a failure if it goes lower than, let's say, 5000 in 2 seconds, you have a brittle test, as it is then contextual to the hardware, not just the implementation. Although, that being said, these sorts of automated tests are only really any use if you are running your tests as some sort of build pipeline, i.e.:

Checkout -> Run Unit Tests -> Run Integration Tests -> Run Performance Tests -> Package

So then you can easily compare the stats from one build to another on the CI server as a report of some sort, and again this may not mean much to anyone if you are not used to Continuous Integration. The main crux of this question is to see how people manage this between builds and how they find it best to report upon.
As I said, it can be subjective, but as knowledge will be gained from the answers it seems a worthwhile question.
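Editor's note: for the "how many inserts/updates/removes in a fixed time budget" style of test described above, a bare-bones harness is easy to sketch. This assumes a hypothetical ISpatialIndex interface with Insert/Remove methods (names invented for illustration; substitute your own quadtree/grid types) and reports operations per second, so a CI job can log the number per build rather than asserting a hardware-dependent threshold:

```csharp
using System;
using System.Diagnostics;

// Hypothetical abstraction over the spatial partition implementations being compared.
public interface ISpatialIndex
{
    void Insert(int id, float x, float y);
    void Remove(int id);
}

public static class SpatialIndexBenchmark
{
    // Runs insert/remove work for a fixed wall-clock budget and returns ops/sec.
    public static double Measure(ISpatialIndex index, TimeSpan budget, int seed = 42)
    {
        var rng = new Random(seed);          // fixed seed => comparable runs
        var sw = Stopwatch.StartNew();
        long ops = 0;
        int id = 0;

        while (sw.Elapsed < budget)
        {
            index.Insert(id, (float)(rng.NextDouble() * 1280), (float)(rng.NextDouble() * 720));
            if (id % 4 == 3) index.Remove(id - 2);   // mix in some removals
            id++;
            ops++;
        }

        return ops / sw.Elapsed.TotalSeconds;
    }
}

// Usage (e.g. from a test runner feeding a CI report); QuadTreeIndex and
// UniformGridIndex are placeholders for your own implementations:
//   double quadTreeRate = SpatialIndexBenchmark.Measure(new QuadTreeIndex(), TimeSpan.FromSeconds(2));
//   double gridRate     = SpatialIndexBenchmark.Measure(new UniformGridIndex(), TimeSpan.FromSeconds(2));
//   Console.WriteLine($"quadtree {quadTreeRate:F0} ops/s vs grid {gridRate:F0} ops/s");
```

Logging the raw rate per build, rather than asserting a minimum, sidesteps the brittle-threshold problem mentioned above while still making regressions visible in the trend.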

    Read the article

  • SQLAuthority News – Community Service and Public Speaking Engagements

    - by pinaldave
    Today is the last day of the year and I was going over my memories for year 2010. Almost all of them are good and I feel for sure better person in terms of knowledge, nature and overall human being. Looking back at the year, it is very satisfying as I was able to go out in public and help community out at various capacity. Thought, most of the time my contribution was as speaker, many times, I have reached out and helped organized event and worked at any capacity to get the event out. I have taken parts in many TechEds, PASS events, Virtual Tech Days, Various Community Events around the Globe and my contribution is not limited to my country only. Overall – I feel good to be part of this wonderful and supportive community. SQLAuthority News – A Successful Community TechDays at Ahmedabad – December 11, 2010 SQLAuthority News – A Successful Performance Tuning Seminar at Pune – Dec 4-5, 2010 SQL SERVER – A Successful Performance Tuning Seminar – Hyderabad – Nov 27-28, 2010 – Next Pune SQLAuthority News – SQLPASS Nov 8-11, 2010-Seattle – An Alternative Look at Experience SQLAuthority News – Statistics and Best Practices – Virtual Tech Days – Nov 22, 2010 SQLAuthority News – SQL Server Performance Optimizations Seminar – Grand Success – Colombo, Sri Lanka – Oct 4 – 5, 2010 SQL SERVER – Visiting Alma Mater – Delivering Session on Database Performance and Career – Nirma Institute of Technology SQLAuthority News – Feedback Received for Virtual Tech Days Sessions on Spatial Database SQLAuthority News – Community Tech Days, Ahmedabad – July 24, 2010 SQLAuthority News – SQL Data Camp, Chennai, July 17, 2010 – A Huge Success SQLAuthority News – 2 Sessions at TechInsight 2010 – June 29 – July 1, 2010 SQLAuthority News – Author Visit – SQL Server 2008 R2 Launch SQLAuthority News – Professional Development and Community SQLAuthority News – TechEd India – April 12-14, 2010 Bangalore – An Unforgettable Experience – An Opportunity of A Lifetime SQLAuthority News – Speaking Sessions at TechEd India – 3 Sessions – 1 Panel Discussion SQLAuthority News – Meeting with Allen Bailochan Tuladhar – An Unlimited Experience SQLAuthority News – Author Visit Review – TechMela Nepal – March 29-30, 2010 SQLAuthority News – Excellent Event – TechEd Sri Lanka – Feb 8, 2010 SQLAuthority News – Hyderabad Techies February Fever Feb 11, 2010 – Indexing for Performance SQLAuthority News – MUGH – Microsoft User Group Hyderabad – Feb 2, 2010 Session Review SQLAuthority News – Ahmedabad Community Tech Days – Jan 30, 2010 – Huge Success For earlier year’s contribution you can check my webpage over here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Book Review, SQLAuthority News, T SQL, Technology

    Read the article

  • Silverlight Cream for March 08, 2011 -- #1056

    - by Dave Campbell
    In this Issue: Joost van Schaik, Manas Patnaik, Kevin Hoffman, Jesse Liberty, Deborah Kurata, Dhananjay Kumar, Dennis Delimarsky, Samuel Jack, Peter Kuhn, WindowsPhoneGeek, and Jfo. Above the Fold: Silverlight: "How I let the trees grow" Peter Kuhn WP7: "Simple Windows Phone 7 / Silverlight drag/flick behavior" Joost van Schaik Shoutouts: SilverlightShow has their top 5 from last week posted, plus the ECOContest is ready to be voted on: SilverlightShow for Feb 28 - March 06, 2011 Drew DeVault is a young man involved with the Microsoft Student Insiders. He gave a WP7 presentation at RMTT and has posted his material: Post-Session: Windows Phone 7 @ RMTT Rui Marinho has an app in the ECO Contest called Forest Findr. is based on the BIng Map Control for silverlight and Sql Spatial data, and helps you find Forests and get geolocated pictures and wikipedia information, and has a post up with a bunch of info on it here: Forest Findr. my entry on the SilverlightShow EcoContest From SilverlightCream.com: Simple Windows Phone 7 / Silverlight drag/flick behavior Joost van Schaik has a behavior that makes *anything* draggable and 'flickable' in WP7 ... read the intro, scroll to the bottom to watch the demo, and then grab up the code... cool stuff, Joost! Data Aggregation Using Presentation Model in RIA and Silverlight 4 Manas Patnaik sent me a link to his blog, and it appears he's got lots of Silverlight goodness out there so you'll be hearing more about him. This first post is on the Presentation Model in RIA and Silverlight 4... good discussion, diagrams and code... good job, Manas! WP7 for iPhone and Android Developers - Advanced UI Kevin Hoffman has part 3 of an ambitious 12-part tutorial series up on WP7 development ... this go-around is concentrating on Advanced UI - Panorama/Pivot controls, DataBinding, ObservableCollections, and Converters... whew! Sterling DB on top of Isolated Storage – 2 Jesse Liberty has part 2 of his Sterling series up... this time setting up the database in App.xaml so it can be used for dealing with tombstoning. Silverlight Charting: Formatting the Tick Marks Deborah Kurata's next chart tutorial is all about showing you how to continue to dress up your charts.. this time by formatting the tick marks... if you don't know what that is... check out the first image in the post. Stored Procedure in WCF Data Service Dhananjay Kumar has a very nice tutorial up on using a stored proc with WCF Data Services... I happen to know someone working on just that at this time. If you have this in mind, here's a step-by-step guide to getting it done. Windows Phone 7 – Episode 5 – Pages Dennis Delimarsky has part 5 of his WP7 tutorial series up and is discussing Pages in this 17 minute video. Unpacking Simon Squared: My mini framework-independent animation library Samuel Jack has not only Open-Sourced the WP7 game he built and blogged about, but he's now explaining some of the structure of the game in posts such as this one about the animation library he wrote that his game is built on. How I let the trees grow Peter Kuhn shares with us the code he used for the tree animation in his ECO Contest entry. There's a lot to learn in this post about performance ... the fully-animated tree has about 20K elements... 5K branches and 20K leaves... check it out. WP7 ToastPrompt in depth WindowsPhoneGeek takes a deep dive into the ToastPrompt control in the Coding4fun Toolkit... everything you need to completely use the control including sample code. 
Beware the loaded event Jfo talks about another frustration point she had with WP7 development, and that is around the use of the loaded event... read these tips from someone that's been there. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Big Data – Operational Databases Supporting Big Data – Key-Value Pair Databases and Document Databases – Day 13 of 21

    - by Pinal Dave
In yesterday's blog post we learned the importance of the relational database and the NoSQL database in the Big Data story. In this article we will understand the role of key-value pair databases and document databases in supporting the Big Data story. Now we will see a few examples of the operational databases:

- Relational Databases (yesterday's post)
- NoSQL Databases (yesterday's post)
- Key-Value Pair Databases (this post)
- Document Databases (this post)
- Columnar Databases (tomorrow's post)
- Graph Databases (tomorrow's post)
- Spatial Databases (tomorrow's post)

Key-Value Pair Databases

Key-value pair databases are also known as KVP databases. A key is a field name, an attribute, an identifier. The content of that field is its value, the data that is being identified and stored. They are a very simple implementation of NoSQL database concepts. They do not have a schema, hence they are very flexible as well as scalable. The disadvantages of key-value pair (KVP) databases are that they do not follow ACID (Atomicity, Consistency, Isolation, Durability) properties. Additionally, they require data architects to plan for data placement, replication as well as high availability. In KVP databases the data is stored as strings. Here is a simple example of what a key-value database looks like:

Key     | Value
Name    | Pinal Dave
Color   | Blue
Twitter | @pinaldave
Name    | Nupur Dave
Movie   | The Hero

As the number of users grows in key-value pair databases, it starts getting difficult to manage the entire database. As there is no specific schema or set of rules associated with the database, there are chances that the database grows exponentially as well. It is very crucial to select the right key-value pair database, one which offers an additional set of tools to manage the data and provides finer control over various business aspects of the same.

Riak

Riak is one of the most popular key-value databases. It is known for its scalability and performance with high volume, high velocity data. Additionally, it implements a mechanism for collecting keys and values which further helps to build a manageable system. We will further discuss Riak in future blog posts. Key-value databases are a good choice for social media, communities, and caching layers for connecting other databases. In simpler words, whenever we require flexibility of data storage while keeping scalability in mind, KVP databases are good options to consider.

Document Databases

There are two different kinds of document databases: 1) full document content (web pages, Word docs, etc.) and 2) storing document components for storage. It is the second type of document database we are talking about here. They use JavaScript Object Notation (JSON) and Binary JSON for the structure of the documents. JSON is a very easy to understand language and it is very easy to write for applications. There are two major structures of JSON used for document databases: 1) name-value pairs and 2) ordered lists. MongoDB and CouchDB are two of the most popular open source non-relational document databases.

MongoDB

MongoDB databases are made up of collections. Each collection is built of documents and each document is composed of fields. MongoDB collections can be indexed for optimal performance. The MongoDB ecosystem is highly available, and supports query services as well as MapReduce. It is often used in high volume content management systems.

CouchDB

CouchDB databases are composed of documents which consist of fields and attachments (known as a description). It supports ACID properties.
One of the main attractions of CouchDB is that it will continue to operate even when network connectivity is sketchy. Due to this nature, CouchDB prefers local data storage. A document database is a good choice when users have to generate dynamic reports from elements which change very frequently. A good example of document database usage is real-time analytics in social networking or content management systems.

Tomorrow: in tomorrow's blog post we will discuss various other operational databases supporting Big Data.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
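Editor's note: as a concrete picture of the key-value table above (where everything is stored as strings), here is a trivial in-memory sketch in C#. The "user:1:" key prefixes are my own convention, since a single dictionary cannot repeat the bare key "Name" the way the example table does; a real KVP store such as Riak adds buckets, distribution and replication on top, but the read/write model the post describes is essentially this:

```csharp
using System;
using System.Collections.Generic;

class KeyValueSketch
{
    static void Main()
    {
        // Everything is a string, keyed by an identifier -- no schema to declare.
        var store = new Dictionary<string, string>
        {
            ["user:1:name"]    = "Pinal Dave",
            ["user:1:color"]   = "Blue",
            ["user:1:twitter"] = "@pinaldave",
            ["user:2:name"]    = "Nupur Dave",
            ["user:2:movie"]   = "The Hero"
        };

        // Reads are simple lookups by key; there is nothing to join or validate.
        Console.WriteLine(store["user:1:twitter"]);

        // Adding a brand-new attribute needs no schema change at all
        // (key and value below are invented purely for illustration).
        store["user:2:color"] = "Green";
    }
}
```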

    Read the article

  • MySql Connector/NET 6.7.4 GA has been released

    - by fernando
MySQL Connector/Net 6.7.4, a new version of the all-managed .NET driver for MySQL, has been released. This is the GA release and is feature complete. It is recommended for production environments. It is appropriate for use with MySQL server versions 5.0-5.7. It is now available in source and binary form from http://dev.mysql.com/downloads/connector/net/#downloads and mirror sites (note that not all mirror sites may be up to date at this point - if you can't find this version on some mirror, please try again later or choose another download site.)

The 6.7 version of MySQL Connector/Net brings the following new features:

- WinRT Connector.
- Load Balancing support.
- Entity Framework 5.0 support.
- Memcached support for the InnoDB Memcached plugin.
- This version also splits the product in two: from now on, starting with version 6.7, Connector/NET will include only the former Connector/NET ADO.NET driver, Entity Framework and ASP.NET providers (core libraries of MySql.Data, MySql.Data.Entity & MySql.Web), while all the former product's Visual Studio integration (design support, IntelliSense, debugger) is available as part of the MySQL Windows Installer under the name "MySQL for Visual Studio".

WinRT Connector: now you can write MySQL data access apps in the Windows Runtime (aka Store apps) using the familiar API of Connector/NET for .NET.

Load Balancing Support: now you can set up a Replication or Cluster configuration in the backend, and Connector/NET will balance the load of queries among all servers making up the backend topology.

Entity Framework 5.0: Connector/NET is now compatible with EF 5, including special features of EF 5 like spatial types.

Memcached: just set up the InnoDB memcached plugin and use Connector/NET's new APIs to establish a client to a MySQL 5.6 server's memcached daemon.

Bug fixes included in this release:

- Fix for Entity Framework when inserting data having Identity columns (Oracle bug #16494585).
- Fix for Connector/NET cannot read data from a MySql table using UTF-16/UTF-32 (MySql bug #69169, Oracle bug #16776818).
- Fix for malformed query in Entity Framework when eager loading due to multiple projections (MySql bug #67183, Oracle bug #16872852).
- Fix for database objects with 'dbo' prefix when using automatic migrations in Entity Framework 5.0 (Oracle bug #16909439).
- Fix for bug: IIS application pool reset worker process causes website to crash (Oracle bug #16909237, MySql bug #67665).
- Fix for bug: error in LINQ to Entities query when using Distinct().Count() (MySql bug #68513, Oracle bug #16950146).
- Fix for occasionally returning no data when the socket connection is slow, interrupted or delayed (MySql bug #69039, Oracle bug #16950212).
- Fix for ConstraintException when filling a datatable (MySql bug #65065, Oracle bug #16952323).
- Fix for Data Provider is not found after uninstalling MySQL for Visual Studio (Oracle bug #16973456).
- Fix for nested SQL generated for LINQ to Entities query with Take and Order by (MySql bug #65723, Oracle bug #16973939).

The documentation is available at http://dev.mysql.com/doc/refman/5.7/en/connector-net.html

Enjoy and thanks for the support!

-- Fernando Gonzalez Sanchez | Software Engineer | Oracle MySQL Windows Experience Team, Connector/NET | Guadalajara | Jalisco | Mexico
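Editor's note: for anyone picking up the driver for the first time, basic usage in 6.7 follows the standard ADO.NET pattern; reference MySql.Data and open a connection. A minimal sketch (the connection string values are placeholders):

```csharp
using System;
using MySql.Data.MySqlClient;

class ConnectorNetExample
{
    static void Main()
    {
        // Placeholder credentials -- replace with your own server/database.
        const string cs = "server=localhost;database=test;uid=appuser;pwd=secret;";

        using (var conn = new MySqlConnection(cs))
        {
            conn.Open();
            using (var cmd = new MySqlCommand("SELECT VERSION()", conn))
            {
                Console.WriteLine("Connected to MySQL " + cmd.ExecuteScalar());
            }
        }
    }
}
```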

    Read the article

  • Master Data Management for Location Data - Oracle Site Hub

    - by david.butler(at)oracle.com
    Most MDM discussions cover key domains such as customer, supplier, product, service, and reference data. It is usually understood that these domains have complex structures and hundreds if not thousands of attributes that need governing. Location, on the other hand, strikes most people as address data. How hard can that be? But for many industries, locations are complex, and site information is critical to efficient operations and relevant analytics. Retail stores and malls, bank branches, construction sites come to mind. But one of the best industries for illustrating the power of a site mastering application is Oil & Gas.   Oracle's Master Data Management solution for location data is the Oracle Site Hub. It is a location mastering solution that enables organizations to centralize site and location specific information from heterogeneous systems, creating a single view of site information that can be leveraged across all functional departments and analytical systems.   Let's take a look at the location entities the Oracle Site Hub can manage for the Oil & Gas industry: organizations, property, land, buildings, roads, oilfield, service center, inventory site, real estate, facilities, refineries, storage tanks, vendor locations, businesses, assets; project site, area, well, basin, pipelines, critical infrastructure, offshore platform, compressor station, gas station, etc. Any site can be classified into multiple hierarchies, like organizational hierarchy, operational hierarchy, geographic hierarchy, divisional hierarchies and so on. Any site can also be associated to multiple clusters, i.e. collections of sites, and these can be used as a foundation for driving reporting, analysis, organize daily work, etc. Hierarchies can also be used to model entities which are structured or non-structured collections of nodes, like for example routes, pipelines and more. The User Defined Attribute Framework provides the needed infrastructure to add single row attributes groups like well base attributes (well IDs, well type, well structure and key characterizing measures, and more) and well geometry, and multi row attribute groups like well applications, permits, production data, activities, operations, logs, treatments, tests, drills, treatments, and KPIs. Site Hub can also model areas, lands, fields, basins, pools, platforms, eco-zones, and stratigraphic layers as specific sites, tracking their base attributes, aliases, descriptions, subcomponents and more. Midstream entities (pipelines, logistic sites, pump stations) and downstream entities (cylinders, tanks, inventories, meters, partner's sites, routes, facilities, gas stations, and competitor sites) can also be easily modeled, together with their specific attributes and relationships. Site Hub can store any type of unstructured data associated to a site. This could be stored directly or on an external content management solution, like Oracle Universal Content Management. Considering a well, for example, Site Hub can store any relevant associated multimedia file such as: CAD drawings of the well profile, structure and/or parts, engineering documents, contracts, applications, permits, logs, pictures, photos, videos and more. For any site entity, Site Hub can associate all the related assets and equipments at the site, as well as all relationships between sites, between a site and multiple parties, and between a site and any purchasable or sellable item, over time. 
    Items can be equipment, instruments, facilities, services, products, production entities, production facilities (pipelines, batteries, compressor stations, gas plants, meters, separators, etc.), support facilities (rigs, roads, transmission or radio towers, airstrips, etc.), supplier products and services, catalogs, and more. Items can simply be associated with sites using standard Site Hub features, or they can be fully mastered by implementing Oracle Product Hub. Site locations (addresses or geographical coordinates) are also managed, with out-of-the-box address geocoding coupled with Google Maps integration to deliver powerful mapping and spatial data analysis capabilities. Locations can be shared between different sites. Centered on the site location, any site can also have associated areas. Site Hub can master any site-location-specific information, such as cadastral, ownership, jurisdictional, geological, and seismic data, and any site-centric area-specific information, such as economic, political, risk, weather, logistic, and traffic information. Now if anyone ever asks you why locations need MDM, think about how all these Oil & Gas entities and attributes would translate into your business locations. To learn more about Oracle's full MDM solution for the digital oil field, here is a link to Roberto Negro's outstanding whitepaper: Oracle Site Master Data Management for mastering wells and other PPDM entities in a digital oilfield context
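
    To make the data-model ideas above a little more concrete, here is a minimal, purely illustrative sketch of a site record with classification hierarchies and user-defined attribute groups. This is not Site Hub's actual API; every class name, field name and sample value below is hypothetical and exists only to show the shape of the problem.

        from dataclasses import dataclass, field
        from typing import Dict, List

        @dataclass
        class AttributeGroup:
            # A named bag of attributes; multi-row groups simply hold several rows.
            name: str
            rows: List[Dict[str, object]] = field(default_factory=list)

        @dataclass
        class Site:
            site_id: str
            site_type: str                                                # e.g. "WELL", "PIPELINE", "REFINERY"
            hierarchies: Dict[str, str] = field(default_factory=dict)     # hierarchy name -> parent site id
            attribute_groups: Dict[str, AttributeGroup] = field(default_factory=dict)
            related_items: List[str] = field(default_factory=list)        # equipment, services, products

        # One well classified in two hierarchies, with a single-row and a multi-row group.
        well = Site(
            site_id="WELL-001",
            site_type="WELL",
            hierarchies={"operational": "FIELD-07", "geographic": "BASIN-NA-03"},
        )
        well.attribute_groups["well_base"] = AttributeGroup(
            name="well_base", rows=[{"well_type": "producer", "depth_m": 2450}]
        )
        well.attribute_groups["production"] = AttributeGroup(
            name="production",
            rows=[{"month": "2010-01", "boe": 12000}, {"month": "2010-02", "boe": 11800}],
        )
        print(well.site_id, list(well.attribute_groups))

    The point of the sketch is only to show why a site is more than an address: the same well participates in several hierarchies at once and carries both single-row and multi-row attribute groups.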

    Read the article

  • Cannot get .NET 4.5 RC to work

    - by ThomasD
    I have installed .NET 4.5 RC from http://msdn.microsoft.com/en-us/netframework/hh854779.aspx because I would like to use the new spatial features when developing with Visual Web Developer 2010 Express. But when I want to change the target framework to .NET 4.5 in the project properties, it is not there. I have checked the directory in C:\Windows\Microsoft.NET\Framework64 and can see that there is no v4.5 folder, but the v4.0 directory has been updated with a timestamp corresponding to when I installed v4.5. The version of the v4.0 directory is v4.0.30319. I am running Windows 7 on my computer. Any ideas why I cannot find v4.5? Thanks, Thomas. UPDATE: Based on the comment below, I have found out that I am running 4.5. For others reading this post: .NET 4.5 replaces the files in the .NET 4.0 directory without renaming the directory to .NET 4.5 (a bit confusing). To check whether your assemblies have been updated, check the product version of e.g. System.dll (right-click - Details). According to this post http://www.west-wind.com/weblog/posts/2012/Mar/13/NET-45-is-an-inplace-replacement-for-NET-40 if the build number in the product version is above 17000, you are running 4.5.
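
    If you would rather check programmatically than inspect file properties, one common approach (a sketch, not taken from the original post) is to read the Release value under the .NET v4 Full registry key; that value is only present once .NET 4.5 or later is installed, and its exact number differs between pre-release and RTM builds.

        # Sketch: detect .NET 4.5+ on Windows by reading the registry.
        # The "Release" DWORD under the v4 "Full" key only exists when .NET 4.5
        # or later is installed; its exact value varies by build (RC vs RTM).
        import winreg

        def dotnet45_or_later():
            key_path = r"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"
            try:
                with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
                    release, _ = winreg.QueryValueEx(key, "Release")
                    return True, release
            except OSError:
                # Key or "Release" value missing: .NET 4.0 (or older) only.
                return False, None

        installed, release = dotnet45_or_later()
        print(".NET 4.5 or later installed:", installed, "- Release value:", release)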

    Read the article

  • How to speed up marching cubes?

    - by Dan Vinton
    I'm using this marching cube algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but are taking a long time to calculate. Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this. I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls? Edit: The code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option... I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
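
    For what it's worth, here is a minimal sketch of the coarse first pass described above, written in Python rather than C# and using a placeholder field function; the block size, margin and sampling pattern are all assumptions. The main pitfall is that a thin feature can slip between the sparse samples, which is what the margin term tries (imperfectly) to guard against.

        # Sketch of a coarse culling pass run before full-resolution marching cubes.
        # field(x, y, z) and the grid dimensions stand in for the real data set.
        def active_blocks(field, dims, block=8, isolevel=0.0, margin=0.25):
            """Yield coarse blocks that might intersect the isosurface.

            A block is skipped only when all of its sparse samples fall below
            isolevel - margin. (A symmetric check for blocks entirely above the
            isolevel could cull interior volumes as well.)
            """
            nx, ny, nz = dims
            for bx in range(0, nx, block):
                for by in range(0, ny, block):
                    for bz in range(0, nz, block):
                        xs = (bx, min(bx + block, nx - 1))
                        ys = (by, min(by + block, ny - 1))
                        zs = (bz, min(bz + block, nz - 1))
                        # Sample the eight block corners plus the block centre.
                        samples = [field(x, y, z) for x in xs for y in ys for z in zs]
                        samples.append(field(min(bx + block // 2, nx - 1),
                                             min(by + block // 2, ny - 1),
                                             min(bz + block // 2, nz - 1)))
                        if max(samples) >= isolevel - margin:
                            yield (bx, by, bz)  # run marching cubes at full resolution here

        # Toy example: a spherical field of radius 30 centred in a 64^3 grid.
        sphere = lambda x, y, z: 30.0 - ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2) ** 0.5
        surviving = list(active_blocks(sphere, (64, 64, 64)))
        print(len(surviving), "of", (64 // 8) ** 3, "coarse blocks survive the first pass")

    Whether this pays off depends on how much of the volume really is dead space; comparing the cost of the culling pass against the field evaluations it saves is the quickest way to find out.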

    Read the article

  • Calculating bounding grid coordinates for a user click on Google Maps/Google Earth

    - by user170304
    Hello, I have a requirement to calculate the centroid, or geodesic midpoint, of the grid cell in which a user clicks between the lat/long grid crossings. The crossings form a square in most parts of Google Earth and sometimes an elongated rectangle, due to the shape of the earth of course. I'm looking for a valid mathematical formula that lets a user click anywhere within this grid, and an accurate function (in JavaScript or server-side code) that takes an assumed grid resolution (say 1 km intervals for this discussion) and the input coordinates and returns a centroid coordinate within that graticule cell. To clarify, please take a look at the image attached to my Google Group post: http://google-earth-api.googlegroups.com/web/Picture+5.png?gda=h5oFPz8AAAD315KpovipQeBwdfGpmW3ZhBc9PTADwYa-n193hZ6AItFmHuno63c7phcEXYVuRA6ccyFKn-rNKC-d1pM%5FIdV0&gsc=sz6bbAsAAABBKF7YXWYyc4GmXg-QruHj What I need to be able to do is: if a user clicks anywhere in this grid square, find the centroid or center point of that grid square, or at least the bounding grid coordinates that make up the square. If we assume that the grid follows the UTM standard and has a maximum resolution of 1 km (or make this a parameter), I need to detect the four nearby grid points, after which calculating the centroid is not as difficult. I welcome any feedback you all may have and appreciate it. I don't have a simple way of letting a user click anywhere on the grid and finding the bounding grid coordinates (the square of 4 coordinates) or the centroid/midpoint of the graticule grid square. One thought is to use assumptions as much as possible, such as a UTM coordinate reference. If I assume that the grid is X degrees wide, can we have a pure JavaScript function take any input coordinate and return the bounding graticule coordinates in decimal degrees? Another thought I had was to create the grid in a geospatial layer that takes any input coordinate and returns the nearest centroid of the graticule. Does this make sense? Thanks! Omar
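
    If the graticule is treated as a regular grid X degrees wide (the simple degree-based case, not a true UTM grid), the snap-to-cell arithmetic is just a floor and a half-cell offset. The sketch below is in Python but ports line-for-line to JavaScript; the 0.1-degree cell size and the sample click are arbitrary assumptions.

        import math

        def graticule_cell(lat, lng, cell_deg=0.1):
            """Return the bounding corners and centroid of the graticule cell
            containing (lat, lng), for a regular grid of cell_deg degrees.

            Treats the graticule as a plain lat/long grid; a true UTM grid would
            need a projection step before the same arithmetic applies.
            """
            south = math.floor(lat / cell_deg) * cell_deg
            west = math.floor(lng / cell_deg) * cell_deg
            north, east = south + cell_deg, west + cell_deg
            corners = [(south, west), (south, east), (north, east), (north, west)]
            centroid = (south + cell_deg / 2.0, west + cell_deg / 2.0)
            return corners, centroid

        # A click somewhere in Manhattan with 0.1-degree cells:
        corners, centroid = graticule_cell(40.7431, -73.9712)
        print("corners:", corners)    # roughly (40.7,-74.0) ... (40.8,-73.9), up to float rounding
        print("centroid:", centroid)  # roughly (40.75, -73.95)

    For cells on the order of 1 km, the arithmetic midpoint and the true geodesic midpoint are effectively the same; only for very large cells would a proper geodesic midpoint calculation be worth the extra trouble.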

    Read the article

  • Use of clone() and appendTo() with draggable - unexpected results with dragging

    - by Matt Gutting
    I'm constructing a UI for a doctor scheduling app. We have several doctors, each of whom can go in one of several locations on a scheduled day (M - F). I have the main day/location grid (table) in the center of the screen. At the left is a table for the doctor names. On loading, each table cell contains a span (outlined) with the doctor name. The doctor name can go in one slot for each day. I didn't want to just put 5 copies of the doctor name, because the doctor might not be available all 5 days of the week. The idea was: drag the span and drop it into the calendar table. On the drag "start" event, clone the span and append it to the table cell, so another span is ready to be dropped into the calendar table. One line of code does the work: $(ui.helper).clone(true).prependTo(ui.helper.parent()); This works, but when I move the cloned span, the original one moves in sync, preserving the spatial relationship as I move the clone around (no doubt there's a "position:relative; left:XX; top:YY" inserted somewhere). I'm sure there's a way to do what I'm thinking of while keeping the two spans independent, but I'm not thinking of one. Does anyone have an idea? Thanks! Matt. (I posted this identical question to the jQuery forum as well.)

    Read the article

  • Where is my Python script spending time? Is there "missing time" in my cProfile/pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:

      p.sort_stats('cumulative').print_stats(20)
      Thu May 6 19:16:50 2010    phes.profile

      355483738 function calls in 11644.421 CPU seconds

      Ordered by: cumulative time
      List reduced from 86 to 20 due to restriction <20>

      ncalls     tottime    percall    cumtime    percall    filename:lineno(function)
      1          0.008      0.008      11644.421  11644.421  <string>:1(<module>)
      1          11064.926  11064.926  11644.413  11644.413  find_pixel_pairs.py:49(phes)
      340135349  544.143    0.000      572.481    0.000      utils.py:173(extent_iterator)
      8831020    18.492     0.000      18.492     0.000      {range}
      231922     3.414      0.000      8.128      0.000      utils.py:152(get_block_in_bands)
      142739     1.303      0.000      4.173      0.000      utils.py:97(search_extent_rect)
      745181     1.936      0.000      2.500      0.000      find_pixel_pairs.py:40(is_no_data)
      285478     1.801      0.000      2.271      0.000      utils.py:98(intify)
      231922     1.198      0.000      2.013      0.000      utils.py:116(block_to_pixel_extent)
      695766     1.990      0.000      1.990      0.000      lrucache.py:42(get)
      1213166    1.265      0.000      1.265      0.000      {min}
      1031737    1.034      0.000      1.034      0.000      {isinstance}
      142740     0.563      0.000      0.909      0.000      utils.py:122(find_block_extent)
      463844     0.611      0.000      0.611      0.000      utils.py:112(block_to_pixel_coord)
      745274     0.565      0.000      0.565      0.000      {method 'append' of 'list' objects}
      285478     0.346      0.000      0.346      0.000      {max}
      285480     0.346      0.000      0.346      0.000      utils.py:109(pixel_coord_to_block_coord)
      324        0.002      0.000      0.188      0.001      utils.py:27(__init__)
      324        0.016      0.000      0.186      0.001      gdal.py:848(ReadAsArray)
      1          0.000      0.000      0.160      0.160      utils.py:50(__init__)

    The top two calls contain the main loop - the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
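
    One way to get per-line timings for the hot function (a suggestion, not part of the original post) is the third-party line_profiler package. The snippet below is a self-contained sketch with a toy function standing in for phes, assuming line_profiler is installed (pip install line_profiler).

        # Sketch: per-line timing with line_profiler. Substitute phes from
        # find_pixel_pairs.py for the toy function below.
        from line_profiler import LineProfiler

        def phes_like(n):
            total = 0
            for i in range(n):        # per-line output shows how much time each line takes
                total += i * i
            return total

        profiler = LineProfiler()
        wrapped = profiler(phes_like)  # a LineProfiler instance can wrap a function directly
        wrapped(1_000_000)
        profiler.print_stats()         # prints hits, time and %time for every line of phes_like

    Alternatively, decorating phes with @profile and running the script under kernprof -l -v should give the same per-line breakdown without changing any call sites.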

    Read the article

  • Is there a definitive list of the differences between the current version of SQL Azure and SQL Server 2008?

    - by Aim Kai
    I am a relative newbie when it comes to SQL Azure! I was wondering if there is a definitive list somewhere regarding what is and is not supported by SQL Azure compared to SQL Server 2008? I have had a look through Google, but I've noticed some of the blog posts are missing things which I have found through my own testing. For example, quite a lot is summarised in this blog entry http://www.keepitsimpleandfast.com/2009/12/main-differences-between-sql-azure-and.html :

      - Common Language Runtime (CLR)
      - Database file placement
      - Database mirroring
      - Distributed queries
      - Distributed transactions
      - Filegroup management
      - Global temporary tables
      - Spatial data and indexes
      - SQL Server configuration options
      - SQL Server Service Broker
      - System tables
      - Trace Flags

    which is a repeat of the MSDN page http://msdn.microsoft.com/en-us/library/ff394115.aspx. I've noticed from my own testing that the following seem to have issues when migrating from SQL Server 2008 to Azure:

      - XML types (the MSDN page does mention large custom types - I guess it may include this, even if the data schema is really small?)
      - Multi-part views

    I've been using SQL Azure Migration Wizard v3.1.8 to migrate local databases into the cloud. I was wondering if anyone could point me to a list or give me any information on when these features are likely to be included in SQL Azure.

    Read the article

  • Render an SSRS report with a Map as an image map without actually having a ReportViewer on the page

    - by Erica Merchant
    I have a report that has a Map with spatial data. Clicking an object on that map sends you to other pages on the site. I have tried a few different ways of displaying the report:

      - If I put a ReportViewer on the actual page, the page sometimes takes 10+ seconds to load, but the ReportViewer creates a fully operable image map.
      - If I create a ReportViewer in the code-behind, I can use the Render method to format the report as HTML 4.0 and get the stream IDs, from which I can extract the image (painfully). This is pretty fast (1-2 seconds), but only gives me an image, no image map.
      - I can get very similar functionality to the above by using rendering extensions on the report URL to create an image and then setting an image's source to that URL. This is the fastest method, but still does not create an image map.

    So is there a way to create an image map from the report without having to use the ReportViewer? Or a way to substantially speed up the ReportViewer?

    Read the article
