Search Results

Search found 7490 results on 300 pages for 'algorithm analysis'.


  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12-hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse ( Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!) Multidimensional Thinking: The Presentation My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did! Kimball All the Way There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right! Schema Generation One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method that you’re comfortable with. But just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. 
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • Does your analytic solution tell you what questions to ask?

    - by Manan Goel
    Analytic solutions exist to answer business questions. Conventional wisdom holds that if you can answer business questions quickly and accurately, you can take better business decisions and therefore achieve better business results and outperform the competition. Most business questions are well understood (read structured) so they are relatively easy to ask and answer. Questions like what were the revenues, cost of goods sold, margins, which regions and products outperformed/underperformed are relatively well understood, and as a result most analytics solutions are well equipped to answer such questions. Things get really interesting when you are looking for answers but you don't know what questions to ask in the first place. That's like an explorer looking to make new discoveries by exploration. An example of this scenario is the Centers for Disease Control (CDC) in the United States trying to find the vaccine for the latest strain of the swine flu virus. The researchers at the CDC may try hundreds of options before finally discovering the vaccine. The exploration process is inherently messy and complex. The process is fraught with false starts, one question or a hunch leading to another, and the final result may look entirely different from what was envisioned in the beginning. Speed and flexibility are key; speed so the hundreds of possible options can be explored quickly, and flexibility because almost everything about the problem, the solutions and the process is unknown.  Come to think of it, most organizations operate in an increasingly unknown or uncertain environment. Business leaders have to take decisions based on a largely unknown view of the future. And since the value proposition of analytic solutions is to help business leaders take better business decisions, for best results, consider adding information exploration and discovery capabilities to your analytic solution. Such exploratory analysis capabilities will help business leaders perform even better by empowering them to refine their hunches, ask better questions and take better decisions. That's your analytic system not only answering the questions but also suggesting what questions to ask in the first place. Today, most leading analytic software vendors offer exploratory analysis products as part of their analytic solutions offerings. So, what characteristics should be top of mind while evaluating the various solutions? The answer is quite simply the same characteristics that are essential for exploration and analysis – speed & flexibility. Speed is required because the system inherently has to be agile to handle hundreds of different scenarios with large volumes of data across large user populations. Exploration happens at the speed of thought, so make sure that your system is capable of operating at the speed of thought. Flexibility is required because the exploration process from start to finish is full of unknowns: unknown questions, answers and hunches. So, make sure that the system is capable of managing and exploring all relevant data – structured or unstructured, like databases, enterprise applications, tweets, social media updates, documents, texts, emails etc. – and provides a flexible, Google-like user interface to quickly explore all relevant data. Getting Started You can help business leaders become "Decision Masters" by augmenting your analytic solution with information discovery capabilities. 
    For best results, make sure that the solution you choose is enterprise class and allows advanced, yet intuitive, exploration and analysis of complex and varied data including structured, semi-structured and unstructured data. You can learn more about Oracle's exploratory analysis solutions by clicking here.

    Read the article

  • Setting up a VPN connection to Amazon VPC - routing

    - by Keeno
    I am having some real issues setting up a VPN between out office and AWS VPC. The "tunnels" appear to be up, however I don't know if they are configured correctly. The device I am using is a Netgear VPN Firewall - FVS336GV2 If you see in the attached config downloaded from VPC (#3 Tunnel Interface Configuration), it gives me some "inside" addresses for the tunnel. When setting up the IPsec tunnels do I use the inside tunnel IP's (e.g. 169.254.254.2/30) or do I use my internal network subnet (10.1.1.0/24) I have tried both, when I tried the local network (10.1.1.x) the tracert stops at the router. When I tried with the "inside" ips, the tracert to the amazon VPC (10.0.0.x) goes out over the internet. this all leads me to the next question, for this router, how do I set up stage #4, the static next hop? What are these seemingly random "inside" addresses and where did amazon generate them from? 169.254.254.x seems odd? With a device like this, is the VPN behind the firewall? I have tweaked any IP addresses below so that they are not "real". I am fully aware, this is probably badly worded. Please if there is any further info/screenshots that will help, let me know. Amazon Web Services Virtual Private Cloud IPSec Tunnel #1 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Your Customer Gateway must be configured with a tunnel interface that is associated with the IPSec tunnel. All traffic transmitted to the tunnel interface is encrypted and transmitted to the Virtual Private Gateway. The Customer Gateway and Virtual Private Gateway each have two addresses that relate to this IPSec tunnel. Each contains an outside address, upon which encrypted traffic is exchanged. Each also contain an inside address associated with the tunnel interface. The Customer Gateway outside IP address was provided when the Customer Gateway was created. Changing the IP address requires the creation of a new Customer Gateway. The Customer Gateway inside IP address should be configured on your tunnel interface. 
Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.42 Inside IP Addresses - Customer Gateway : 169.254.254.2/30 - Virtual Private Gateway : 169.254.254.1/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: To route traffic between your internal network and your VPC, you will need a static route added to your router. Static Route Configuration Options: - Next hop : 169.254.254.1 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. IPSec Tunnel #2 ================================================================================ #1: Internet Key Exchange Configuration Configure the IKE SA as follows - Authentication Method : Pre-Shared Key - Pre-Shared Key : --- - Authentication Algorithm : sha1 - Encryption Algorithm : aes-128-cbc - Lifetime : 28800 seconds - Phase 1 Negotiation Mode : main - Perfect Forward Secrecy : Diffie-Hellman Group 2 #2: IPSec Configuration Configure the IPSec SA as follows: - Protocol : esp - Authentication Algorithm : hmac-sha1-96 - Encryption Algorithm : aes-128-cbc - Lifetime : 3600 seconds - Mode : tunnel - Perfect Forward Secrecy : Diffie-Hellman Group 2 IPSec Dead Peer Detection (DPD) will be enabled on the AWS Endpoint. We recommend configuring DPD on your endpoint as follows: - DPD Interval : 10 - DPD Retries : 3 IPSec ESP (Encapsulating Security Payload) inserts additional headers to transmit packets. These headers require additional space, which reduces the amount of space available to transmit application data. To limit the impact of this behavior, we recommend the following configuration on your Customer Gateway: - TCP MSS Adjustment : 1387 bytes - Clear Don't Fragment Bit : enabled - Fragmentation : Before encryption #3: Tunnel Interface Configuration Outside IP Addresses: - Customer Gateway : 217.33.22.33 - Virtual Private Gateway : 87.222.33.46 Inside IP Addresses - Customer Gateway : 169.254.254.6/30 - Virtual Private Gateway : 169.254.254.5/30 Configure your tunnel to fragment at the optimal size: - Tunnel interface MTU : 1436 bytes #4: Static Routing Configuration: Static Route Configuration Options: - Next hop : 169.254.254.5 You should add static routes towards your internal network on the VGW. The VGW will then send traffic towards your internal network over the tunnels. EDIT #1 After writing this post, I continued to fiddle and something started to work, just not very reliably. The local IPs to use when setting up the tunnels where indeed my network subnets. Which further confuses me over what these "inside" IP addresses are for. The problem is, results are not consistent what so ever. I can "sometimes" ping, I can "sometimes" RDP using the VPN. Sometimes, Tunnel 1 or Tunnel 2 can be up or down. When I came back into work today, Tunnel 1 was down, so I deleted it and re-created it from scratch. Now I cant ping anything, but Amazon AND the router are telling me tunnel 1/2 are fine. I guess the router/vpn hardware I have just isnt up to the job..... EDIT #2 Now Tunnel 1 is up, Tunnel 2 is down (I didn't change any settings) and I can ping/rdp again. EDIT #3 Screenshot of route table that the router has built up. Current state (tunnel 1 still up and going string, 2 is still down and wont re-connect)

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#

    Read the article

  • Circular Shifts on Strings in Bash

    - by Kyle Van Koevering
    I have a homework assignment where I need to take input from a file and continuously remove the first word in a line and append it to the end of the line until all combinations have been done. I really don't know where to begin and would be thankful for any sort of direction. The part that has me confused is that this is supposed to be performed without the use of arrays. I'm not just fishing for someone to solve the problem for me, I'm just looking for some direction. Thank you very much for your time and help.
    SAMPLE INPUT:
    Pipes and Filters
    Java Swing
    Software Requirements Analysis
    SAMPLE OUTPUT:
    Analysis Software Requirements
    Filters Pipes and
    Java Swing
    Pipes and Filters
    Requirements Analysis Software
    Software Requirements Analysis
    Swing Java
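    Just to make the expected transformation concrete, here is the rotation logic sketched in Python (illustration only; the assignment itself still has to be done in Bash without arrays, for example with set -- $line, shift and the positional parameters). Note that the sample output above is sorted, and it seems to skip the rotation that would start with "and":

        import sys

        # For every input line, repeatedly move the first word to the end until
        # the original order would come around again, then sort all rotations
        # (the sample output above looks alphabetically sorted).
        rotations = []
        for line in sys.stdin:
            words = line.split()
            rotations += [" ".join(words[i:] + words[:i]) for i in range(len(words))]
        print("\n".join(sorted(rotations)))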

    Read the article

  • Visual Studio hangs when deploying a cube

    - by Richie
    Hello All, I'm having an issue with an Analysis Services project in Visual Studio 2005. My project always builds but only occasionally deploys. No errors are reported and VS just hangs. This is my first Analysis Services project so I am hoping that there is something obvious that I am just missing. Here is the situation: I have a cube that I have successfully deployed. I then make some change, e.g., adding a hierarchy to a dimension. When I try to deploy again, VS hangs. I have to restart Analysis Services to regain control of VS so I can shut it down. I restart everything sometimes once, sometimes twice or more before the project will eventually deploy. This happens with any change I make; there seems to be no pattern to this behaviour. Sometimes I have to delete the cube from Analysis Services before restarting everything to get a successful deploy. Also, I have successfully deployed the cube and then subsequently reprocessed a dimension, but when I then open a query window in SQL Server Management Studio it says that it can't find any cubes. As a test I have deployed a cube successfully, then deleted it in Analysis Services and attempted to redeploy it, without making any changes to the cube, only to see the same behaviour mentioned above. VS just hangs for no reason, so I have no idea where to start hunting down the problem. It is taking 15-20 minutes to make a change as simple as setting the NameColumn of a dimension attribute. As you can imagine this is taking hours of my time, so I would greatly appreciate any assistance anyone can give me.

    Read the article

  • SQL Server 2008 and SP1

    - by andrew007
    Hi, I have a server where I installed SQL Server 2008 and afterwards applied SP1. Now I also want to add Analysis Services to this instance by using "Add or remove features...". My questions are: Is it possible to add Analysis Services on a server with SP1 already installed? How can I also apply SP1 to the new Analysis Services feature? THANKS!

    Read the article

  • Depth interpolation for z-buffer, with scanline

    - by Twodordan
    I have to write my own software 3d rasterizer, and so far I am able to project my 3d model made of triangles into 2d space: I rotate, translate and project my points to get a 2d space representation of each triangle. Then, I take the 3 triangle points and I implement the scanline algorithm (using linear interpolation) to find all points[x][y] along the edges(left and right) of the triangles, so that I can scan the triangle horizontally, row by row, and fill it with pixels. This works. Except I have to also implement z-buffering. This means that knowing the rotated&translated z coordinates of the 3 vertices of the triangle, I must interpolate the z coordinate for all other points I find with my scanline algorithm. The concept seems clear enough, I first find Za and Zb with these calculations: var Z_Slope = (bottom_point_z - top_point_z) / (bottom_point_y - top_point_y); var Za = top_point_z + ((current_point_y - top_point_y) * Z_Slope); Then for each Zp I do the same interpolation horizontally: var Z_Slope = (right_z - left_z) / (right_x - left_x); var Zp = left_z + ((current_point_x - left_x) * Z_Slope); And of course I add to the zBuffer, if current z is closer to the viewer than the previous value at that index. (my coordinate system is x: left - right; y: top - bottom; z: your face - computer screen;) The problem is, it goes haywire. The project is here and if you select the "Z-Buffered" radio button, you'll see the results... (note that the rest of the options before "Z-Buffered" use the Painter's algorithm to correctly order the triangles. I also use the painter's algorithm -only- to draw the wireframe in "Z-Buffered" mode for debugging purposes) PS: I've read here that you must turn the z's into their reciprocals (meaning z = 1/z) before you interpolate. I tried that, and it appears that there's no change. What am I missing? (could anyone clarify, precisely where you must turn z into 1/z and where to turn it back?)
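    On the 1/z question, a sketch of what is usually meant (my variable names, assuming z grows into the screen and stays positive): take the reciprocal once per projected vertex, interpolate that value – never z itself – down the edges and across each scanline exactly like the Za/Zb/Zp calculations above, store it in the buffer, and only invert back with z = 1/iz if you need the true depth. For the visibility test you can compare the interpolated 1/z values directly, because a larger 1/z means a smaller z, i.e. closer to the viewer:

        def shade_scanline(y, x_left, inv_z_left, x_right, inv_z_right,
                           z_buffer, put_pixel, color):
            """Fill one scanline, depth-testing on 1/z (buffer initialised to 0.0)."""
            if x_right == x_left:
                return
            # per-pixel step of 1/z, analogous to Z_Slope above but on reciprocals
            inv_z_slope = (inv_z_right - inv_z_left) / (x_right - x_left)
            inv_z = inv_z_left
            for x in range(int(x_left), int(x_right) + 1):
                if inv_z > z_buffer[x][y]:        # larger 1/z == closer
                    z_buffer[x][y] = inv_z
                    put_pixel(x, y, color)        # true depth if needed: 1.0 / inv_z
                inv_z += inv_z_slope

    If the projection simply drops the z axis (no perspective divide), interpolating z directly is already correct and the reciprocal trick makes no visible difference, which may be why nothing changed; the 1/z form only matters once x and y have been divided by z (or w) during projection.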

    Read the article

  • Do I need path finding to make AI avoid obstacles?

    - by yannicuLar
    How do you know when a path-finding algorithm is really needed? There are contexts where you just want to improve AI navigation to avoid an object, like a spaceship that won't crash into a planet or a car that already knows where to steer but needs small corrections to avoid a road bump. As I've seen on similar posts, the obvious solution is to implement some path-finding algorithm, most likely A*, and let your AI-controlled object navigate along the resulting path. Now, I have the necessary skills to implement a path-finding algorithm, and I'm not being lazy here, but I'm still a bit skeptical about whether this is really needed. I have the impression that path finding is appropriate for navigating through a maze, or for picking a path when there are many alternatives. But in obstacle avoidance, when you do know the path but need to make slight corrections, is path finding really necessary? Even when the obstacles are sparse or small? I mean, in real life, when you're driving and notice a bump on the road, you just have to pick between steering a bit to the left (keeping the bump on your right side) or the other way around. You will not consider stopping or going backwards. Path finding would be appropriate when you need to pick a route through the city, right? So, are there any other methods to help AI navigation besides path finding? And if there are, how do you know when path finding is the appropriate algorithm? Thanks for any thoughts.
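    For the "small corrections" case a local steering behaviour (in the spirit of Craig Reynolds' obstacle avoidance) is usually enough: look a short distance ahead along the heading, and if an obstacle falls inside that corridor, add a small lateral correction away from it. No graph search is involved. A rough Python sketch, with invented names and units:

        import math

        def avoidance_steer(pos, heading, obstacles, look_ahead=30.0, clearance=5.0):
            """Return a small heading correction (radians); 0.0 means keep going.

            pos and obstacles are (x, y) pairs, heading is in radians, clearance is
            obstacle radius plus agent radius. Local avoidance only, no path finding.
            """
            hx, hy = math.cos(heading), math.sin(heading)
            nearest = None
            for ox, oy in obstacles:
                dx, dy = ox - pos[0], oy - pos[1]
                forward = dx * hx + dy * hy          # distance along the heading
                lateral = -dx * hy + dy * hx         # signed distance off the heading
                if 0.0 < forward < look_ahead and abs(lateral) < clearance:
                    if nearest is None or forward < nearest[0]:
                        nearest = (forward, lateral)
            if nearest is None:
                return 0.0                           # nothing in the corridor
            forward, lateral = nearest
            # steer away from the side the obstacle is on; closer means stronger
            return -math.copysign(1.0, lateral) * (1.0 - forward / look_ahead) * 0.5

    Path finding earns its keep when local corrections can get you stuck (dead ends, mazes, dense obstacle fields); a common compromise is A* over a coarse grid or waypoint graph for the route, with a steering layer like the one above handling the bumps along it.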

    Read the article

  • A Cautionary Tale About Multi-Source JNDI Configuration

    - by scott.s.nelson(at)oracle.com
    Here's a bit of fun with WebLogic JDBC configurations.  I ran into this issue after reading that p13nDataSource and cgDataSource-NonXA should not be configured as multi-source. There were some issues changing them to use the basic JDBC connection string and when rolling back to the bad configuration the server went "Boom".  Since one purpose behind this blog is to share lessons learned, I just had to post this. If you write your descriptors manually (as opposed to generating them using the WLS console) and put a comma-separated list of JNDI addresses like this: <jdbc-data-source-params> <jndi-name>weblogic.jdbc.jts.commercePool,contentDataSource, contentVersioningDataSource,portalFrameworkPool</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <data-source-list>portalDataSource-rac0,portalDataSource-rac1</data-source-list> <failover-request-if-busy>false</failover-request-if-busy> </jdbc-data-source-params> so long as the first address resolves, it will still work. Sort of.  If you call this connection to do an update, only one node of the RAC instance is updated. Other wonderful side-effects include the server refusing to start sometimes. The proper way to list the JNDI sources is one per node, like this: <jdbc-data-source-params> <jndi-name>weblogic.jdbc.jts.commercePool</jndi-name> <jndi-name>contentDataSource</jndi-name> <jndi-name>contentVersioningDataSource</jndi-name> <jndi-name>portalFrameworkPool</jndi-name> <algorithm-type>Load-Balancing</algorithm-type> <data-source-list>portalDataSource-rac0, portalDataSource-rac1, portalDataSource-rac2 </data-source-list> <failover-request-if-busy>false</failover-request-if-busy> </jdbc-data-source-params>(Props to Sandeep Seshan for locating the root cause)

    Read the article

  • A new mission statement for my school's algorithms class

    - by Eric Fode
    The teacher at Eastern Washington University who is now teaching the algorithms course is new to Eastern, and as a result the course has changed drastically, mostly in the right direction. That being said, I feel that the class could use a more specific and industry-oriented direction (since that is where most students will go, though suggestions for an academia-oriented class are also welcome). Having only worked in industry for 2 years, I would like the community's opinion (a wider, much more collectively experienced and in the end plausibly more credible one) on the quality of this as a statement of the purpose of an algorithms class, and, if I am completely off target, your suggestion for the purpose of a required junior-level algorithms class that is standalone (so no other classes focusing specifically on algorithms are required). The statement is as follows: The purpose of the algorithms class is to do three things: Primarily, to teach how to learn, do basic analysis of, and implement a given algorithm found outside of the class. Secondly, to teach the student how to model a problem in their mind so that they can find an existing algorithm or have a direction to start the development of a new algorithm. Third, to overview a variety of algorithms that exist and to deeply understand and analyze one algorithm in each of the basic algorithmic design strategies: Divide and Conquer, Reduce and Conquer, Transform and Conquer, Greedy, Brute Force, Iterative Improvement and Dynamic Programming. The question, in short, is: do you agree with this statement of the purpose of an algorithms course, so that it would be useful in the real world? If not, what would you suggest?

    Read the article

  • Find points whose pairwise distances approximate a given distance matrix

    - by Stephan Kolassa
    Problem. I have a symmetric distance matrix with entries between zero and one, like this one: D = ( 0.0 0.4 0.0 0.5 ) ( 0.4 0.0 0.2 1.0 ) ( 0.0 0.2 0.0 0.7 ) ( 0.5 1.0 0.7 0.0 ) I would like to find points in the plane that have (approximately) the pairwise distances given in D. I understand that this will usually not be possible with strictly correct distances, so I would be happy with a "good" approximation. My matrices are smallish, no more than 10x10, so performance is not an issue. Question. Does anyone know of an algorithm to do this? Background. I have sets of probability densities between which I calculate Hellinger distances, which I would like to visualize as above. Each set contains no more than 10 densities (see above), but I have a couple of hundred sets. What I did so far. I did consider posting at math.SE, but looking at what gets tagged as "geometry" there, it seems like this kind of computational geometry question would be more on-topic here. If the community thinks this should be migrated, please go ahead. This looks like a straightforward problem in computational geometry, and I would assume that anyone involved in clustering might be interested in such a visualization, but I haven't been able to google anything. One simple approach would be to randomly plonk down points and perturb them until the distance matrix is close to D, e.g., using Simulated Annealing, or run a Genetic Algorithm. I have to admit that I haven't tried that yet, hoping for a smarter way. One specific operationalization of a "good" approximation in the sense above is Problem 4 in the Open Problems section here, with k=2. Now, while finding an algorithm that is guaranteed to find the minimum l1-distance between D and the resulting distance matrix may be an open question, it still seems possible that there at least is some approximation to this optimal solution. If I don't get an answer here, I'll mail the gentleman who posed that problem and ask whether he knows of any approximation algorithm (and post any answer I get to that here).
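    One standard non-search answer to this is classical multidimensional scaling (MDS): double-centre the squared distance matrix and take the top two eigenpairs. A minimal numpy sketch using the example matrix above (function and variable names are mine):

        import numpy as np

        def classical_mds(D, k=2):
            """Coordinates in R^k whose pairwise distances approximate D."""
            n = D.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n      # centring matrix
            B = -0.5 * J @ (D ** 2) @ J              # double-centred squared distances
            eigvals, eigvecs = np.linalg.eigh(B)     # ascending eigenvalues
            top = np.argsort(eigvals)[::-1][:k]
            scale = np.sqrt(np.maximum(eigvals[top], 0.0))   # clip if D is non-Euclidean
            return eigvecs[:, top] * scale

        D = np.array([[0.0, 0.4, 0.0, 0.5],
                      [0.4, 0.0, 0.2, 1.0],
                      [0.0, 0.2, 0.0, 0.7],
                      [0.5, 1.0, 0.7, 0.0]])
        X = classical_mds(D)
        approx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        print(np.round(approx, 2))                   # compare entry by entry with D

    Classical MDS minimises a strain criterion rather than the l1 objective in the open problem linked above, so it will not be optimal in that sense, but it is deterministic, trivial for 10x10 matrices, and its output makes a good starting point for a local refinement (for example metric MDS / SMACOF as implemented in scikit-learn's sklearn.manifold.MDS, or a generic optimiser over the 2n coordinates).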

    Read the article

  • An XEvent a Day (8 of 31) – Targets Week – synchronous_event_counter

    - by Jonathan Kehayias
    Yesterday’s post, Targets Week - Bucketizers , looked at the bucketizer Targets in Extended Events and how they can be used to simplify analysis and perform more targeted analysis based on their output.  Today’s post will be fairly short, by comparison to the previous posts, while we look at the synchronous_event_counter target, which can be used to test the impact of an Event Session without actually incurring the cost of Event collection. What is the synchronous_event_counter? The synchronous_event_count...(read more)

    Read the article

  • SQL SERVER – Maximize Database Performance with DB Optimizer – SQL in Sixty Seconds #054

    - by Pinal Dave
    Performance tuning is an interesting concept and everybody evaluates it differently. Every developer and DBA has a different opinion about how one can do performance tuning. I personally believe performance tuning is a three-step process: Understanding the Query, Identifying the Bottleneck, and Implementing the Fix. When we are working with a large database application and it suddenly starts to slow down, we are all under stress about how we can get the database back to normal speed. Most of the time we do not have enough time to do a deep analysis of what is going wrong, as well as what will fix the problem. Our primary goal at that time is to just fix the database problem as fast as we can. However, one very important thing which we need to keep in mind is that when we do a quick fix, it should not create any further issue with other parts of the system. When time is of the essence and we want to do a deep analysis of our system to give us the best solution, we often tend to make mistakes. Sometimes we make mistakes because we do not have proper time to analyze the entire system. Here is what I do when I face such a situation – I take the help of DB Optimizer. It is a fantastic tool and does superlative performance tuning of the system. Every time I talk about a performance tuning tool, the initial reaction of people is that they do not want to try it, as they believe it requires a lot of learning of the tool before they can use it. That is absolutely not true in the case of DB Optimizer. It is a very easy to use and self-intuitive tool. One can get going with the product in no time. Here is a quick video I have built where I demonstrate how we can identify which index is missing for a query and how we can quickly create that index. All three steps of query tuning are completed in less than 60 seconds. If you are into performance tuning and query optimization you should download DB Optimizer and give it a go. Let us see the same concept in the following SQL in Sixty Seconds video: You can Download DB Optimizer and reproduce the same Sixty Seconds experience. Related Tips in SQL in Sixty Seconds: Performance Tuning – Part 1 of 2 – Getting Started and Configuration Performance Tuning – Part 2 of 2 – Analysis, Detection, Tuning and Optimizing What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Interview Questions and Answers, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Identity

    Read the article

  • BigData and Customer Experience: Happy Together

    - by Isabel F. Peñuelas
    The two big buzzes of the year may lie closer together than it appears. Both concepts intersect at various points: BigData and Return on Investment of Marketing Campaigns. In a recent post, Big Data Is The Future Of Marketing, Jeff Dachis explains very clearly how "Big data analytics finally allows marketers to identify, measure, and manage what is positively impacting their Brand". Regression analysis applied to big data volumes coming from social media will replace the failed attempts to justify marketing investments on social media in terms of followers and likes, he continues; "the measurement models applied by marketers on TV Campaigns don't work on social". We need to study the data with fresh eyes and maybe then we will start understanding and measuring brand engagement. Social CRM and BigData The real value of Social CRM starts with analyzing masses of big data from social media in order to apply social intelligence techniques that allow us to classify new customer niches and communities and define appropriate strategies to contact potential customers. Gartner says that the market for Social CRM is on pace to surpass $1 Billion in revenue by year-end 2012, but in the words of Zach Hofer-Shall, Analyst at Forrester Research, "Social customer relationship management is hard" (The Social CRM Arms Race Heats). To succeed, brands need three things: investing in new social tools, investing in consultancy and investing in infrastructure for massive data storage and analysis. Neither CeX nor BigData are easy and cheap wins. But what are the customer benefits of such investments? Big Data and Brand Engagement Time is the most valuable asset of today's consumers: tired of information overload, exhausted by the terabytes of offerings, anxious because they do not have the same fast multichannel experience with their services' marketers or preferred goods providers that they find on their social media. Yes, I know you have read this before, me too. But it is real. The motto of the Customer Experience philosophy of providing a consistent experience through multiple touchpoints that makes the customer/brand relationship easier and more valuable finds its basis in understanding customers' preferences and context, for which BigData analysis is another imperative. In summary, I believe that using BigData analysis in combination with appropriate CeX strategies and technologies is a promising direction for achieving: efficiency and marketing cost savings; growing the customer base; and increasing customer conversion and retention. In a word: The Direction of Future Marketing.

    Read the article

  • MySQL and Hadoop Integration - Unlocking New Insight

    - by Mat Keep
    “Big Data” offers the potential for organizations to revolutionize their operations. With the volume of business data doubling every 1.2 years, analysts and business users are discovering very real benefits when integrating and analyzing data from multiple sources, enabling deeper insight into their customers, partners, and business processes. As the world’s most popular open source database, and the most deployed database in the web and cloud, MySQL is a key component of many big data platforms, with Hadoop vendors estimating 80% of deployments are integrated with MySQL. The new Guide to MySQL and Hadoop presents the tools enabling integration between the two data platforms, supporting the data lifecycle from acquisition and organisation to analysis and visualisation / decision, as shown in the figure below The Guide details each of these stages and the technologies supporting them: Acquire: Through new NoSQL APIs, MySQL is able to ingest high volume, high velocity data, without sacrificing ACID guarantees, thereby ensuring data quality. Real-time analytics can also be run against newly acquired data, enabling immediate business insight, before data is loaded into Hadoop. In addition, sensitive data can be pre-processed, for example healthcare or financial services records can be anonymized, before transfer to Hadoop. Organize: Data is transferred from MySQL tables to Hadoop using Apache Sqoop. With the MySQL Binlog (Binary Log) API, users can also invoke real-time change data capture processes to stream updates to HDFS. Analyze: Multi-structured data ingested from multiple sources is consolidated and processed within the Hadoop platform. Decide: The results of the analysis are loaded back to MySQL via Apache Sqoop where they inform real-time operational processes or provide source data for BI analytics tools. So how are companies taking advantage of this today? As an example, on-line retailers can use big data from their web properties to better understand site visitors’ activities, such as paths through the site, pages viewed, and comments posted. This knowledge can be combined with user profiles and purchasing history to gain a better understanding of customers, and the delivery of highly targeted offers. Of course, it is not just in the web that big data can make a difference. Every business activity can benefit, with other common use cases including: - Sentiment analysis; - Marketing campaign analysis; - Customer churn modeling; - Fraud detection; - Research and Development; - Risk Modeling; - And more. As the guide discusses, Big Data is promising a significant transformation of the way organizations leverage data to run their businesses. MySQL can be seamlessly integrated within a Big Data lifecycle, enabling the unification of multi-structured data into common data platforms, taking advantage of all new data sources and yielding more insight than was ever previously imaginable. Download the guide to MySQL and Hadoop integration to learn more. I'd also be interested in hearing about how you are integrating MySQL with Hadoop today, and your requirements for the future, so please use the comments on this blog to share your insights.

    Read the article

  • #MDX in London and speculation about future books

    - by Marco Russo (SQLBI)
    Chris Webb, who wrote the Expert Cube Development with Microsoft SQL Server 2008 Analysis Services book with me and Alberto, is preparing another Introduction to MDX course in London, this time from October 26th to 28th. It is now a three-day course (previously it was a two-day course) and you can find every other detail here. You might be wondering whether we are writing something else... well, we don't have plans to release a new edition of the Analysis Services book - after all, all the content of the...(read more)

    Read the article

  • Understanding Data Science: Recent Studies

    - by Joe Lamantia
    If you need such a deeper understanding of data science than Drew Conway's popular venn diagram model, or Josh Wills' tongue in cheek characterization, "Data Scientist (n.): Person who is better at statistics than any software engineer and better at software engineering than any statistician." two relatively recent studies are worth reading.   'Analyzing the Analyzers,' an O'Reilly e-book by Harlan Harris, Sean Patrick Murphy, and Marck Vaisman, suggests four distinct types of data scientists -- effectively personas, in a design sense -- based on analysis of self-identified skills among practitioners.  The scenario format dramatizes the different personas, making what could be a dry statistical readout of survey data more engaging.  The survey-only nature of the data,  the restriction of scope to just skills, and the suggested models of skill-profiles makes this feel like the sort of exercise that data scientists undertake as an every day task; collecting data, analyzing it using a mix of statistical techniques, and sharing the model that emerges from the data mining exercise.  That's not an indictment, simply an observation about the consistent feel of the effort as a product of data scientists, about data science.  And the paper 'Enterprise Data Analysis and Visualization: An Interview Study' by researchers Sean Kandel, Andreas Paepcke, Joseph Hellerstein, and Jeffery Heer considers data science within the larger context of industrial data analysis, examining analytical workflows, skills, and the challenges common to enterprise analysis efforts, and identifying three archetypes of data scientist.  As an interview-based study, the data the researchers collected is richer, and there's correspondingly greater depth in the synthesis.  The scope of the study included a broader set of roles than data scientist (enterprise analysts) and involved questions of workflow and organizational context for analytical efforts in general.  I'd suggest this is useful as a primer on analytical work and workers in enterprise settings for those who need a baseline understanding; it also offers some genuinely interesting nuggets for those already familiar with discovery work. We've undertaken a considerable amount of research into discovery, analytical work/ers, and data science over the past three years -- part of our programmatic approach to laying a foundation for product strategy and highlighting innovation opportunities -- and both studies complement and confirm much of the direct research into data science that we conducted. There were a few important differences in our findings, which I'll share and discuss in upcoming posts.

    Read the article

  • Using The Data Mining Query Task in SSIS

    SQL Server Integration Services (SSIS) is a Business Intelligence tool which can be used by database developers or administrators to perform Extract, Transform & Load (ETL) operations. In my previous article Using Analysis Services Processing Task & Analysis Services ... [Read Full Article]

    Read the article

  • WildPackets Monitors Diverse Networks

    WildPackets offers portable network analysis products which are designed for use on enterprise networks and in test and measurement labs, plus distributed network analysis solutions for enterprise-wide applications.

    Read the article

  • Parent-child hierarchies and unary operators in PowerPivot

    - by Marco Russo (SQLBI)
    Alberto wrote an excellent post describing how to implement the Unary Operator feature (which is present in Analysis Services) in PowerPivot (there was a previous post about parent-child hierarchies, too). I have to say that the solution is not as easy to implement as in Analysis Services, but it just works and, from a practical point of view, it is not so difficult to implement if you understand how it works and accept its limitations (only sums and subtractions are supported). I think that many...(read more)

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Search Engine Optimizing

    Search Engine Optimization is a process by which a web site is improved so that it can be more easily found by search engines, rank higher and be found by its target audience. The main components to SEO are: keyword analysis, content analysis, title and meta tags, relevant link building, search engine submission, and maintenance. Below are steps in the process.

    Read the article

  • Botnet Malware Sleeps Eight Months Activation, Child Concerns

    Daily Safety Check experts used a computer forensic analysis of a significant botnet that consisted of Carberp and SpyEye malware to come up with the details for their report. The analysis found that the botnet profiled the behavior of the slave computers it infected, similar to surveillance techniques used by law enforcement agencies, for an average of eight months. During the eight months, the botnet analyzed each computer's users and assigned ratings to certain activities to form a complete profile for each. Doing so allowed those behind the scheme to determine which were the most favora...

    Read the article
