Search Results

Search found 1580 results on 64 pages for 'scheme'.


  • networking tunnel adapter connections?

    - by Karthik Balaguru
    I understand that a tunnel adapter LAN connection is for encapsulating IPv6 packets with an IPv4 header so that they can be sent across an IPv4 network. A few queries popped up in my mind based on this. If I run 'ipconfig', apart from the Ethernet adapter LAN details, I get a series of statements like these:
    Tunnel adapter Local Area Connection* 6
    Tunnel adapter Local Area Connection* 7
    Tunnel adapter Local Area Connection* 12
    Tunnel adapter Local Area Connection* 13
    Tunnel adapter Local Area Connection* 14
    Tunnel adapter Local Area Connection* 15
    Tunnel adapter Local Area Connection* 16
    Except for *16, all the other tunnel adapter Local Area Connections show "Media Disconnected". Why is the numbering for the tunnel adapters not sequential? It runs 6, 7, 12, 13, 14, 15, 16 - a strange numbering scheme! I tried to figure it out by thinking of some arithmetic series, but it does not seem to fit; there is a huge gap between 7 and 12. Any ideas? What is the need for so many tunnel adapter connections? Can you tell me a scenario that requires all of them? I ran ipconfig /all to get more information. From the listing, I understand that 16, 15, 14 and 12 are Microsoft 6to4 adapters; 13 and 6 are ISATAP adapters; and 7 is the Teredo Tunneling Pseudo-Interface. I understand that the above are for automatic tunneling, so that the tunnel endpoints are determined automatically by the routing infrastructure. 6to4 is recommended by RFC 3056 for automatic tunneling that uses protocol 41 for encapsulation; it is typically used when an end user wants to connect to the IPv6 Internet using their existing IPv4 connection. Teredo is an automatic tunneling technique that uses UDP encapsulation across multiple NATs; that is, it grants IPv6 connectivity to nodes that are located behind IPv6-unaware NAT devices. ISATAP treats the IPv4 network as a virtual IPv6 local link, with mappings from each IPv4 address to a link-local IPv6 address; it transmits IPv6 packets between dual-stack nodes on top of an IPv4 network. To put it in simple words, ISATAP is an intra-site mechanism, while 6to4 and Teredo are inter-site tunneling mechanisms. It seems that only Teredo should be enabled by default in Vista, but my system does not show it as enabled by default. Interestingly, it shows a 6to4 tunnel adapter (Tunnel adapter Local Area Connection* 16) as enabled by default. Any specific reasons for that? If I run ipconfig /all, why is only one Teredo adapter present while four 6to4 adapters are present? I searched the internet for answers to the above queries, but I am unable to find clear answers.
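
    For anyone who wants to see how their own tunnel interfaces are grouped, here is a minimal Python sketch (an illustration, not part of the original question) that shells out to ipconfig /all on Windows and tallies the tunnel adapters by type. It assumes English-language ipconfig output and the standard adapter descriptions (6to4, ISATAP, Teredo); adjust the patterns for localized systems.

        import re
        import subprocess
        from collections import Counter

        def tunnel_adapter_summary():
            # Run ipconfig /all and capture its text output (Windows only).
            text = subprocess.run(["ipconfig", "/all"], capture_output=True, text=True).stdout
            counts = Counter()
            current = None
            for line in text.splitlines():
                # Adapter sections start with lines like "Tunnel adapter Local Area Connection* 12:"
                m = re.match(r"Tunnel adapter (.+):", line)
                if m:
                    current = m.group(1)
                elif current and "Description" in line:
                    desc = line.split(":", 1)[1].strip()
                    if "6to4" in desc:
                        counts["6to4"] += 1
                    elif "ISATAP" in desc.upper():
                        counts["ISATAP"] += 1
                    elif "Teredo" in desc:
                        counts["Teredo"] += 1
                    current = None
            return counts

        if __name__ == "__main__":
            # e.g. Counter({'6to4': 4, 'ISATAP': 2, 'Teredo': 1}) on a system like the one described
            print(tunnel_adapter_summary())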

    Read the article

  • nginx, php-cgi and "No input file specified."

    - by Stephen Belanger
    I'm trying to get nginx to play nice with php-cgi, but it's not quite working how I'd like. I'm using some set variables to allow for dynamic host names - basically anything.local. I know that part is working because I can access static files properly; however, PHP files don't work. I get the standard "No input file specified." error, which normally occurs when the file doesn't exist, but it definitely does exist and the path is correct, because I can access static files in the same path. It could possibly be a permissions thing, but I'm not sure how that could be an issue. I'm running this on Windows under my own user account, so I think it should have permission unless php-cgi is running under a different user without me telling it to. Here's my config:
    worker_processes 1;
    events { worker_connections 1024; }
    http {
        include mime.types;
        default_type application/octet-stream;
        sendfile on;
        keepalive_timeout 65;
        gzip on;
        server {
            # Listen for HTTP
            listen 80;
            # Match to local host names.
            server_name *.local;
            # We need to store a "cleaned" host.
            set $no_www $host;
            set $no_local $host;
            # Strip out www.
            if ($host ~* www\.(.*)) {
                set $no_www $1;
                rewrite ^(.*)$ $scheme://$no_www$1 permanent;
            }
            # Strip local for directory names.
            if ($no_www ~* (.*)\.local) {
                set $no_local $1;
            }
            # Define default path handler.
            location / {
                root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                index index.php index.html index.htm;
                # Route non-existent paths through Kohana system router.
                try_files $uri $uri/ /index.php?kohana_uri=$request_uri;
            }
            # Pass PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root ../Users/Stephen/Documents/Work/$no_local.com/hosts/main/docs;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi.conf;
            }
            # Prevent access to system files.
            location ~ /\. { return 404; }
            location ~* ^/(modules|application|system) { return 404; }
        }
    }
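
    A common cause of php-cgi's "No input file specified." is that the SCRIPT_FILENAME handed to FastCGI (the stock fastcgi.conf builds it from $document_root$fastcgi_script_name) does not resolve to a real file - for example because a relative root such as ../Users/... is resolved against nginx's prefix path, or because a variable in the path ends up empty. As a hedged illustration only - the prefix and the $no_local value below are assumptions, not values from the question - this small Python sketch rebuilds the path nginx would pass and checks whether it exists:

        import os

        # Hypothetical values; adjust to what nginx actually resolves at runtime
        # (a debug-level error.log shows the real SCRIPT_FILENAME).
        nginx_prefix = r"C:\nginx"    # prefix nginx was started from (assumption)
        root = "../Users/Stephen/Documents/Work/example.com/hosts/main/docs"  # assumes $no_local == "example"
        script = "/index.php"         # URI of the PHP file being requested

        # A relative root is resolved against the nginx prefix, then the URI is appended.
        script_filename = os.path.normpath(os.path.join(nginx_prefix, root) + script)
        print("SCRIPT_FILENAME would be:", script_filename)
        print("exists:", os.path.isfile(script_filename))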

    Read the article

  • Intermittent Disconnection of Client Computers from Domain Server

    - by dilip nagle
    The background: I have a Windows Server 2008 Enterprise machine with 25 user CALs. It hosts a domain, all users, and a network-shared HP printer. The server has two network cards, and both cards, as well as all client machines, are on the 192.168.1.* addressing scheme with subnet mask 255.255.255.0. Of the two network cards, 192.168.1.231 and 192.168.1.233, only 192.168.1.231 is registered with DNS. The second card, 192.168.1.233, has its default gateway and DNS address set to 192.168.1.231. The server has three hard disks with capacities of 500 GB, 500 GB and 1 TB, partitioned as (C,D,E), (F,G) and (K), with partition K holding all user data in various shared folders. Each of these folders on partition K is mapped onto each user's computer according to the access rights given to them. The problem: the server was installed about 6 months ago and to date has not hung or given any problem even once. All the client computers are able to run the web-based software via IP address, e.g. http://192.168.1.231/webERP/default.aspx. However, occasionally, when any client computer tries to browse its network mappings, it hangs. There is no fixed pattern; this may happen after running smoothly for, say, 3 days. On each client machine the network settings are as follows: IP address: 192.168.1.* where * is 1, 2, 3, ...; subnet mask: 255.255.255.0; default gateway: 192.168.1.231 (which is a server card and the DNS address); preferred DNS server: 192.168.1.231. In the Advanced tab under WINS, LMHOSTS lookup is unticked and the default option is selected. Ideally, I would have loved to disable NetBIOS over TCP/IP, because disabling NetBIOS would drastically reduce the NetBIOS broadcast traffic sent to all the computers on the network for name resolution, but some network printers cannot be accessed if that option is selected. On the server I have WINS running; I have scavenged records, verified database integrity, removed tombstoned records, and so on. The only critical errors, shown once a day when the server is started, are 4224 (WINS) and 12923 (Server Licensing failed to update DNS record). I fail to understand why client machines hang when they try to browse mapped network shared folders on the K drive. Kindly advise.
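
    Since the shares hang only intermittently, it can help to separate name resolution from SMB connectivity at the moment of a hang. The following is purely an illustrative Python sketch, not from the original post; "SERVER" is a placeholder for the real server's DNS/NetBIOS name, and the loop is meant to be left running on a client until a hang occurs.

        import socket
        import time

        SERVER_NAME = "SERVER"        # placeholder: the file server's DNS/NetBIOS name
        SERVER_IP = "192.168.1.231"   # the NIC registered in DNS

        def check():
            # 1. Does the name still resolve, and to which address?
            try:
                resolved = socket.gethostbyname(SERVER_NAME)
            except socket.gaierror as e:
                resolved = f"resolution failed: {e}"
            # 2. Is the SMB port reachable on the expected address?
            try:
                with socket.create_connection((SERVER_IP, 445), timeout=5):
                    smb = "port 445 reachable"
            except OSError as e:
                smb = f"port 445 failed: {e}"
            print(time.strftime("%H:%M:%S"), resolved, "|", smb)

        for _ in range(60):   # sample once a minute for an hour; extend as needed
            check()
            time.sleep(60)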

    Read the article

  • HP MSR 30-10a Router - Route Traffic over Default Route

    - by SteadH
    We have a brand new HP MSR 30-10a router and a fairly simple routing situation: two IP blocks, one of which has a route out. We need things on the first block to go through the router and out. I have an old Cisco 2801 router doing the job right now. For our example:
    IP Block 1: 50.203.110.232/29 - router interface on this block is 50.203.110.237, route out is 50.203.110.233.
    IP Block 2: 50.202.219.1/27 - router interface on this block is 50.202.219.20.
    I have a static route created for 0.0.0.0 0.0.0.0 50.203.110.233, and the router seems to understand it. When on the CLI via serial cable, I can ping 8.8.8.8 and hear responses from Google DNS. Woo hoo! The issue arrives when any client sits on the IP Block 2 side. I configured my client with a static IP of 50.202.219.15/27 and default gateway 50.202.219.20. I can ping myself, I can ping the near side of the router (50.202.219.20), and I can ping the far side of the router (50.203.110.237). I cannot ping anything else in IP Block 1, nor can I ping 8.8.8.8. Here is my configuration file:
    <HP>display current-configuration
    #
    version 5.20.106, Release 2507, Standard
    #
    sysname HP
    #
    domain default enable system
    #
    dar p2p signature-file flash:/p2p_default.mtd
    #
    port-security enable
    #
    undo ip http enable
    #
    password-recovery enable
    #
    vlan 1
    #
    domain system
     access-limit disable
     state active
     idle-cut disable
     self-service-url disable
    #
    user-group system
     group-attribute allow-guest
    #
    local-user admin
     password cipher $c$3$40gC1cxf/wIJNa1ufFPJsjKAof+QP5aV
     authorization-attribute level 3
     service-type telnet
    #
    cwmp
     undo cwmp enable
    #
    interface Aux0
     async mode flow
     link-protocol ppp
    #
    interface Cellular0/0
     async mode protocol
     link-protocol ppp
    #
    interface Ethernet0/0
     port link-mode route
     ip address 50.203.110.237 255.255.255.248
    #
    interface Ethernet0/1
     port link-mode route
     ip address 50.202.219.20 255.255.255.224
    #
    interface NULL0
    #
    ip route-static 0.0.0.0 0.0.0.0 50.203.110.233 permanent
    #
    load xml-configuration
    #
    load tr069-configuration
    #
    user-interface tty 12
    user-interface aux 0
    user-interface vty 0 4
     authentication-mode scheme
    #
    My guess right now is that there is some sort of "permission" needed to use the default route. The manuals haven't turned up much in this area that doesn't make the situation more complicated (but maybe it needs to be more complicated?). Background: we use HP switches, and I love the CLI. I bought HP thinking the command-line interface would be similar, or at least speak the same language. Whoops! I'd be happy to provide more information or perform any additional tests. Thanks in advance!
    Update 1: The manual mentions routing rules. I hadn't previously added these (since our Cisco 2801 seems to route anything by default). I added:
    ip ip-prefix 1 permit 0.0.0.0 0 less-equal 32
    Alas, still no dice.
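
    Just to sanity-check the addressing before digging into router policy, here is an illustrative Python sketch (using only the standard ipaddress module and the values from the question) that confirms which interfaces and hosts fall inside which block. Note it assumes the second block is the /27 containing 50.202.219.1-30, i.e. 50.202.219.0/27.

        import ipaddress

        block1 = ipaddress.ip_network("50.203.110.232/29")
        block2 = ipaddress.ip_network("50.202.219.0/27")   # assumed network for the 50.202.219.1/27 block

        for label, addr in [
            ("router side A (Ethernet0/0)", "50.203.110.237"),
            ("upstream next hop",           "50.203.110.233"),
            ("router side B (Ethernet0/1)", "50.202.219.20"),
            ("test client",                 "50.202.219.15"),
        ]:
            ip = ipaddress.ip_address(addr)
            print(f"{label:28} {addr:16} in block1={ip in block1}  in block2={ip in block2}")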

    Read the article

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to an https site from one of my servers. When I type:
    telnet puppet 8140
    I am presented with a standard telnet console and can talk to the server as always:
    Connected to athena.hidden.tld.
    Escape character is '^]'.
    GET / HTTP/1.1
    <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>400 Bad Request</title>
    </head><body>
    <h1>Bad Request</h1>
    <p>Your browser sent a request that this server could not understand.<br />
    Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
    Instead use the HTTPS scheme to access this URL, please.<br />
    <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p>
    <hr>
    <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address>
    </body></html>
    Connection closed by foreign host.
    But when I try to connect to the same host and port with SSL:
    openssl s_client -connect puppet:8140
    it is not working:
    connect: No route to host
    connect:errno=113
    I am confused. At first it sounded like a firewall problem, but that could not be, could it? Because that would also prevent the telnet connection. As the firewall I am using ferm on both servers. The systems are Debian Squeeze VM boxes.
    [edit 1] Even when I try to connect directly with the IP address:
    openssl s_client -connect 198.51.100.1:8140   # address exchanged
    connect: No route to host
    connect:errno=113
    Bringing down the firewalls on both hosts with service ferm stop does not help either. But when I do
    openssl s_client -connect localhost:8140
    on the server machine, it connects fine.
    [edit 2] If I connect to the IP with telnet, it also does not work:
    telnet 198.51.100.1 8140
    Trying 198.51.100.1...
    telnet: Unable to connect to remote host: No route to host
    The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default, and that works. For example, telnet -6 puppet 8140 works but telnet -4 puppet 8140 does not. So there seems to be a problem with the IPv4 route. openssl seems to use only IPv4 (or IPv4 by default) and therefore fails, while telnet uses IPv6 and succeeds.
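
    To make the IPv4-versus-IPv6 split visible in one place, here is a small hedged Python sketch (not from the original post) that resolves the same name for each address family and attempts a TCP connection to port 8140; "puppet" is the placeholder host name from the question.

        import socket

        HOST, PORT = "puppet", 8140   # placeholder host name from the question

        for family, label in [(socket.AF_INET, "IPv4"), (socket.AF_INET6, "IPv6")]:
            try:
                infos = socket.getaddrinfo(HOST, PORT, family, socket.SOCK_STREAM)
            except socket.gaierror as e:
                print(f"{label}: no address ({e})")
                continue
            for *_, sockaddr in infos:
                try:
                    # A success here but a failure on the other family reproduces
                    # the telnet-works / openssl-fails symptom described above.
                    with socket.create_connection((sockaddr[0], PORT), timeout=5):
                        print(f"{label} {sockaddr[0]}: connected")
                except OSError as e:
                    print(f"{label} {sockaddr[0]}: {e}")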

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:
    /dev/sda1 - Windows 8 (ntfs)
    /dev/sda2 - Ubuntu / (ext4)
    /dev/sda3 - Ubuntu home (ext4)
    /dev/sda5 - swap
    /dev/sda6 - Shared data partition (exfat)
    (First off, yes, I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them, replacing the ones on the shared partition. When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.
    Edit - command output:
    lsblk
    NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda      8:0    0 465.8G  0 disk
    +-sda1   8:1    0 165.1G  0 part
    +-sda2   8:2    0  21.3G  0 part /
    +-sda3   8:3    0  98.9G  0 part /home
    +-sda4   8:4    0     1K  0 part
    +-sda5   8:5    0   7.8G  0 part [SWAP]
    +-sda6   8:6    0 172.7G  0 part /mnt/shared_data
    /etc/fstab
    # <file system> <mount point> <type> <options> <dump> <pass>
    proc /proc proc nodev,noexec,nosuid 0 0
    # /dev/sda2
    UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
    # /dev/sda3
    UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
    # /dev/sda5
    UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
    # /dev/sda6
    UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3
    /etc/mtab
    /dev/sda2 / ext4 rw,errors=remount-ro 0 0
    proc /proc proc rw,noexec,nosuid,nodev 0 0
    sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
    none /sys/fs/fuse/connections fusectl rw 0 0
    none /sys/kernel/debug debugfs rw 0 0
    none /sys/kernel/security securityfs rw 0 0
    udev /dev devtmpfs rw,mode=0755 0 0
    devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
    tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
    none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
    none /run/shm tmpfs rw,nosuid,nodev 0 0
    /dev/sda3 /home ext4 rw 0 0
    /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
    binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
    gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0

    Read the article

  • WordPress 3.5 Multisite and nginx siteurl issues

    - by Florin Gogianu
    I'm setting up multisite on localhost in subdirectories. The problem is that when I'm trying to access the dashboard of a site I just created ( localhost/wptest/site/wp-admin ) I get "This webpage has a redirect loop" and when I try to access the actual website ( localhost/wptest/site ) the page loads but without assets, such as css. When I access the network dashboard, or the primary site dashboard on localhost/wptest everything is just fine. Also when I edit the permalink of the second site in the network dashboard, to be like this: localhost/site it also runs fine. How to make it work with the default permalink structure localhost/wptest/site? The wordpress files are in /usr/share/html/wptest The wp-config.php is as follows: define('WP_ALLOW_MULTISITE', true); define('MULTISITE', true); define('SUBDOMAIN_INSTALL', false); define('DOMAIN_CURRENT_SITE', 'localhost'); define('PATH_CURRENT_SITE', '/wptest/'); define('SITE_ID_CURRENT_SITE', 1); define('BLOG_ID_CURRENT_SITE', 1); And the server block / virtual host is like this: server { ##DM - uncomment following line for domain mapping listen 80 default_server; #server_name example.com *.example.com ; ##DM - uncomment following line for domain mapping #server_name_in_redirect off; access_log /var/log/nginx/example.com.access.log; error_log /var/log/nginx/example.com.error.log; root /usr/share/nginx/html/wptest; index index.html index.htm index.php; if (!-e $request_filename) { rewrite /wp-admin$ $scheme://$host$uri/ permanent; rewrite ^(/[^/]+)?(/wp-.*) $2 last; rewrite ^(/[^/]+)?(/.*\.php) $2 last; } location / { try_files $uri $uri/ /index.php?$args ; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; fastcgi_pass unix:/var/run/php5-fpm.sock; } location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires max; } location = /robots.txt { access_log off; log_not_found off; } location ~ /\. { deny all; access_log off; log_not_found off; } } And finally here's an error log: 2013/06/29 08:05:37 [error] 4056#0: *52 rewrite or internal redirection cycle while internally redirecting to "/index.php", client: 127.0.0.1, server: example.com, request: "GET /nginx HTTP/1.1", host: "localhost"

    Read the article

  • What are incentives (if any) to use WinRT instead of .Net?

    - by Ark-kun
    Let's compare WinRT with .Net .Net .Net is the 13+ years evolution of COM. Three main parts of .Net are execution environment, standard libraries and supported languages. CLR is the native-code execution environment based on COM .Net Framework has a big set of standard libraries (implemented using managed and native code) that can be used from all .Net languages. There are .Net classes that allow using OS APIs. WPF or Silverlight provide a XAML-based UI framework .Net can be used with C++, C#, Javascript, Python, Ruby, VB, LISP, Scheme and many other languages. C++/.Net is a variation of the C++ language that allows interaction with .Net objects. .Net supports inheritance, generics, operator and method overloading and many other features. .Net allows creating apps that run on Windows (XP, 7, 8 Pro (Desktop and Metro), RT, CE, etc), Mac OS, Linux (+ other *nix); iOS, Android, Windows Phone (7, 8); Internet Explorer, Chrome, Firefox; XBox 360, Playstation Suite; raw microprocessors. There is support for creating games (2D/3D) using any managed language or C++. Created by Developer Division WinRT WinRT is based on COM. Three main parts of WinRT are execution environment, standard libraries and supported languages. WinRT has a native-code execution environment based on COM WinRT has a set of standard libraries that more or less can be used from WinRT languages. There are WinRT classes that allow using OS APIs. Unnamed Silverlight clone provides a XAML-based UI framework WinRT can be used with C++, C#, Javascript, VB. C++/CX is a variation of the C++ language that allows interaction with WinRT objects. Custom WinRT components don't support inheritance (classes must be sealed), generics, operator overloading and many other features. WinRT allows creating apps that run on Windows 8 Pro and RT (Metro only); Windows Phone 8 (limited). There is support for creating games (2D/3D) using C++ only. Ordered by Windows Team I think that all the aspects except the last ones are very important for developers. On the other hand it seems that the most important aspect for Microsoft is the last one. So, given the above comparison of conceptually identical technologies, what are incentives (if any) to use WinRT instead of .Net?

    Read the article

  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that scale up and down according to the load we're receiving on our sites at any given moment. For the sake of our discussion here, we're running four instances: master.ourdomain.com, the file-syncing "hub" of the webservers, and www1/www2/www3.ourdomain.com, three webservers which turn on or off as dictated by load. I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master, and master syncs against each webserver. Thus the webservers are kept in sync even though they aren't syncing against each other directly. I'm having one problem that I'm having a hard time solving, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under that circumstance, I would like the following to happen: first, the newly awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date. Then, and only then, should it begin pushing changes in its file structure back to master. Unfortunately, currently, when a sleeping server is started and lsyncd starts up, it pushes changes back to master before updating its own codebase, thus overwriting new code with old. So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen by using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:
    settings = {
        logfile = "/home/user/log/lsyncd/log.txt",
        statusFile = "/home/user/log/lsyncd/status.txt",
        maxProcesses = 2,
        nodaemon = false,
    }
    bash = {
        onStartup = "rsync [email protected]:/home/user/www /home/user/www"
    }
    sync{
        default.rsyncssh,
        source="/home/user/www/",
        host="[email protected]",
        targetdir="/home/user/www/",
        rsyncOpts="-ltus",
        excludeFrom="/home/user/conf/lsyncd/exclude"
    }
    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think that does what I'm looking for. Are there any suggestions about how I should approach this problem? Thanks for lending your time and expertise. Chris
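
    One hedged way to get the "pull from master first, then start pushing" ordering, independent of lsyncd's own startup hooks, is to wrap lsyncd in a small launcher that runs the one-way rsync and only then execs lsyncd. This is only a sketch built on the paths from the question (the host and config path are placeholders, including the redacted [email protected]), not a tested drop-in:

        import os
        import subprocess
        import sys

        MASTER = "[email protected]"         # redacted in the question; the real master host goes here
        REMOTE_DIR = "/home/user/www/"    # code tree on master
        LOCAL_DIR = "/home/user/www/"     # code tree on this webserver
        LSYNCD_CONF = "/home/user/conf/lsyncd/lsyncd.conf.lua"  # assumed config path

        def main():
            # One-way pull from master so local code is current before any push happens.
            # --delete mirrors master exactly; drop it if stale local files should survive.
            pull = subprocess.run(
                ["rsync", "-a", "--delete", f"{MASTER}:{REMOTE_DIR}", LOCAL_DIR]
            )
            if pull.returncode != 0:
                sys.exit(f"initial rsync from master failed ({pull.returncode}); not starting lsyncd")
            # Only now hand control to lsyncd, which resumes the two-way scheme.
            os.execvp("lsyncd", ["lsyncd", LSYNCD_CONF])

        if __name__ == "__main__":
            main()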

    Read the article

  • If Nvidia Shield can stream a game via wifi, why can I not do the same via ethernet to any other PC?

    - by Enigma
    I think it absurd that a wireless game streaming solution is the *first to hit the market when a 1000mbps+ Ethernet connection would accomplish the same feat with roughly 6x the available bandwidth. I can only assume that there must be some reason behind this or a limitation preventing this, but what? 150mbps wifi is in no way superior to a 1000mbps LAN connection aside from well wireless mobility. Not only that but I have a secondary laptop and desktop which should by hardware comparison completely outperform anything the Tegra in the Nvidia Shield can do. Is this all just a marketing scheme to force people to buy the shield for the streaming benefit? Chief among these is that NVIDIA’s Shield handheld game console will be getting a microconsole-like mode, dubbed “Shield Console Mode”, that will allow the handheld to be converted into a more traditional TV-connected console. In console mode Shield can be controlled with a Bluetooth controller, and in accordance with the higher resolution of TVs will accept 1080p game streaming from a suitably equipped PC, versus 720p in handheld mode. With that said 1080p streaming will require additional bandwidth, and while 720p can be done over WiFi NVIDIA will be requiring a hardline GigE connection for 1080p streaming (note that Shield doesn’t have Ethernet, so this is presumably being done over USB). Streaming aside, in console mode Shield will also support its traditional local gaming/application functionality. - http://www.anandtech.com/show/7435/nvidia-consolidates-game-streaming-tech-under-gamestream-brand-announces-shield-console-mode ^ This is not acceptable for me for a number of reasons not to mention the ridiculousness of having a little screen+controller unit sitting there while using a secondary controller and screen instead. That kind of redundant absurdity exemplifies how wrong of a solution that is. They need a second product for this solution without the screen or controller for it to make sense... at which point your just buying a little computer that does what most other larger computers do better. All that is required, by my understanding, is the ability to decode H.264 video compression and transmit control/feedback so by any logical comparison, one (Nvidia especially) should have no difficulty in creating an application for PC's (win32/64 environment) that does the exact same thing their android app does. I have 2 video cards capable of streaming (encoding) H.264 so by right they must be capable of decoding it I would think. I haven't found anything stating plans to allow non-shield owners to do this. Can a third party create this software or does it hinge on some limitation that only Nvidia can overcome? (*) - perhaps this isn't the first but afaik it is the first complete package.

    Read the article

  • Incremental RPM package version "numbers" for x.y.z > x.y.z-beta (or alpha, rc, etc)

    - by Jonathan Clarke
    In order to publish RPM packages of several different versions of some software, I'm looking for a way to specify version "numbers" that are considered "upgrades", and include the differentiation of several pre-release versions, such as (in order): "2.4.0 alpha 1", "2.4.0 alpha 2", "2.4.0 alpha 3", "2.4.0 beta 1", "2.4.0 beta 2", "2.4.0 release candidate", "2.4.0 final", "2.4.1", "2.4.2", etc. The main issue I have with this is that RPM considers that "2.4.0" comes earlier than "2.4.0.alpha1", so I can't just add the suffix on the end of the final version number. I could try "2.4.0.alpha1", "2.4.0.beta1", "2.4.0.final", which would work, except for the "release candidate" that would be considered later than "2.4.0.final". An alternative I considered is using the "epoch:" section of the RPM version number (the epoch: prefix is considered before the main version number so that "1:2.4.0" is actually earlier than "2:1.0.0"). By putting a timestamp in the epoch: field, all the versions get ordered as expected by RPM, because their versions appear to increment in time. However, this fails when new releases are made on several major versions at the same time (for example, 2.3.2 is released after 2.4.0, but their version for RPM are "20121003:2.3.2" and "20120928:2.4.0" and systems on 2.3.2 can't get "upgraded" to 2.4.0, because rpm sees it as an older version). In this case, yum/zypper/etc refuse to upgrade to 2.4.0, thus my problem. What version numbers can I use to achieve this, and make sure that RPM always considers the version numbers to be in order. Or if not version numbers, other mechanism in RPM packaging? Note 1: I would like to keep the "Release:" field of the spec file for it's original purpose (several releases of packages, including packaging changes, for the same version of the packaged software). Note 2: This should work on current production versions of major distributions, such as RHEL/CentOS 6 and SLES 11. But I'm interested in solutions that don't, too, so long as they don't involve recompiling rpm! Note 3: On Debian-like systems, dpkg uses a special component in the version number which is the "~" (tilde) character. This causes dpkg to count the suffix as "negative" ordering, so that "2.4.0~anything" will come before "2.4.0". Then, normal ordering applies after the "~", so "2.4.0~alpha1" comes before "2.4.0~beta1" because "alpha" comes before "beta" alphabetically. I'm not necessarily looking to use the same scheme for RPM packages (I'm pretty sure no such equivalent exists), so this is just FYI.
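
    As a rough illustration of why the plain suffixes sort the "wrong" way, here is a simplified Python sketch of RPM-style version comparison. It splits versions into digit and letter runs and compares segment by segment; it deliberately omits tilde/caret handling and other librpm edge cases, so treat it as an approximation of the rules, not the real algorithm.

        import re

        def segments(version):
            # Split into runs of digits or letters, dropping separators like '.' and '-'.
            return re.findall(r"\d+|[a-zA-Z]+", version)

        def rpm_like_cmp(a, b):
            """Approximate RPM ordering: negative if a < b, positive if a > b, 0 if equal."""
            sa, sb = segments(a), segments(b)
            for x, y in zip(sa, sb):
                if x.isdigit() and y.isdigit():
                    if int(x) != int(y):
                        return int(x) - int(y)
                elif x.isdigit() != y.isdigit():
                    # A numeric segment is treated as newer than an alphabetic one.
                    return 1 if x.isdigit() else -1
                elif x != y:
                    return (x > y) - (x < y)
            # If one version simply has more segments, it sorts as newer --
            # which is exactly why "2.4.0.alpha1" ends up *after* "2.4.0".
            return len(sa) - len(sb)

        for pair in [("2.4.0", "2.4.0.alpha1"), ("2.4.0.alpha1", "2.4.0.beta1")]:
            print(pair, "->", "first is older" if rpm_like_cmp(*pair) < 0 else "first is newer or equal")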

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change. The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) to a temp table. A stored procedure is then run which expects the temp table to exist and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes. Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element and an ID of 0 signifies "create" aka an insert of a new item. <Elements> <Element ID="1234"> <Attr ID="221">Value</Attr> <Attr ID="225">287</Attr> <Attr ID="234"> <Element ID="99825"> <Attr ID="7">Value1</Attr> <Attr ID="8">Value2</Attr> <Attr ID="9" Action="delete" /> </Element> <Element ID="99826" Action="delete" /> <Element ID="0" Type="24"> <Attr ID="7">Value4</Attr> <Attr ID="8">Value5</Attr> <Attr ID="9">Value6</Attr> </Element> <Element ID="0" Type="24"> <Attr ID="7">Value7</Attr> <Attr ID="8">Value8</Attr> <Attr ID="9">Value9</Attr> </Element> </Attr> <Rel ID="3827" Action="delete" /> <Rel ID="2284" Role="parent"> <Element ID="3827" /> <Element ID="3829" /> <Attr ID="665">1</Attr> </Rel> <Rel ID="0" Type="23" Role="child"> <Element ID="3830" /> <Attr ID="67" </Rel> </Element> <Element ID="0" Type="87"> <Attr ID="221">Value</Attr> <Attr ID="225">569</Attr> <Attr ID="234"> <Element ID="0" Type="24"> <Attr ID="7">Value10</Attr> <Attr ID="8">Value11</Attr> <Attr ID="9">Value12</Attr> </Element> </Attr> </Element> <Element ID="1235" Action="delete" /> </Elements> Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by javascript). And there may be an Action="Delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No or if no one's bothered to say Yes yet. There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as: ElementID 2284 ElementID1 1234 ElementID2 3827 Having multiple children in one relationship isn't currently supported, but it would be nice later. 
Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.Net at some point. A persistence engine like Linq or nHibernate is probably not acceptable right now--I just want to get this already working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle unicode characters as well as very long strings (10k +). UPDATE I have had this working for some time and I used the ADO Recordset Save-To-Stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code works to handle any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements: Existing data elements Example: input name e12345_a678 (element 12345, attribute 678), the input value is the value of the attribute. New elements Javascript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items. var newid = 0; function metadataAdd(reference, nameid, value) { var t = document.createElement('input'); t.setAttribute('name', nameid); t.setAttribute('id', nameid); t.setAttribute('type', 'hidden'); t.setAttribute('value', value); reference.appendChild(t); } function multiAdd(target, parentelementid, attrid, elementtypeid) { var proto = document.getElementById('a' + attrid + '_proto'); var instance = document.createElement('p'); target.parentNode.parentNode.insertBefore(instance, target.parentNode); var thisid = ++newid; instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_'); instance.id = 'n' + thisid; instance.className += ' new'; metadataAdd(instance, 'n' + thisid + '_p', parentelementid); metadataAdd(instance, 'n' + thisid + '_c', attrid); metadataAdd(instance, 'n' + thisid + '_t', elementtypeid); return false; } Example: Template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). all attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created: n1_t, value is the elementtype of the element to be created n1_p, value is the parent id of the element (if it is a relationship) n1_c, value is the child id of the element (if it is a relationship) Deleting elements A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete. With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly. 
When the form is posted, here's a sample of building one of the two recordsets used (classic ASP code): Set Data = Server.CreateObject("ADODB.Recordset") Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull Data.CursorLocation = adUseClient Data.CursorType = adOpenDynamic Data.Open This is the recordset for values, the other is for the elements themselves. I step through the posted form and for the element recordset use a Scripting.Dictionary populated with instances of a custom Class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added as negative to distinguish them from regular elements (rather than requiring a separate column to indicate if it is new or addresses an existing element). I use regular expression to tear apart the form keys: "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$" Then, adding an attribute looks like this. Data.AddNew ElementID.Value = DataID AttrID.Value = Integerize(Matches(0).SubMatches(3)) AttrValue.Value = Request.Form(Key) Data.Update ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so: Set Cmd = Server.CreateObject("ADODB.Command") With Cmd Set .ActiveConnection = MyDBConn .CommandType = adCmdStoredProc .CommandText = "DataPost" .Prepared = False .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element)) .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data)) End With Result.Open Cmd ' previously created recordset object with options set Here's the function that does the xml conversion: Private Function XMLFromRecordset(Recordset) Dim Stream Set Stream = Server.CreateObject("ADODB.Stream") Stream.Open Recordset.Save Stream, adPersistXML Stream.Position = 0 XMLFromRecordset = Stream.ReadText End Function Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346 for example). Here are some key snippets from the stored procedure. 
Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon: CREATE PROCEDURE [dbo].[DataPost] @ElementMetaData ntext, @ElementData ntext AS DECLARE @hdoc int --- snip --- EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData, '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />' INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2) SELECT * FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0) WITH ( ElementID int, ElementTypeID int, ElementID1 int, ElementID2 int ) ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation EXEC sp_xml_removedocument @hdoc --- snip --- UPDATE E SET E.ElementTypeID = M.ElementTypeID FROM Element E INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1 The following query does the correlation of the negative new element ids to the newly inserted ones: UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows SET NewElementID = Scope_Identity() - @@RowCount + DataID WHERE ElementID < 0 Other set-based queries do all the other work of validating that the attributes are allowed, are the correct data type, and inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with 2 inputs was also much easier than sticking to some ideal about having everything in a single XML stream.
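
    For readers who want to experiment with the update-document shape described above outside of ASP/ADO, this is a hedged Python sketch that assembles the same kind of Elements/Element/Attr payload from plain data. The element and attribute IDs are the sample values from the question; the function name and input structure are invented for illustration only.

        import xml.etree.ElementTree as ET

        def build_update_doc(elements):
            """elements: list of dicts like
               {"id": 1234, "attrs": {221: "Value", 225: "287"}, "action": None, "type": 87}"""
            root = ET.Element("Elements")
            for e in elements:
                el = ET.SubElement(root, "Element", ID=str(e["id"]))
                if e.get("action") == "delete":
                    el.set("Action", "delete")
                    continue
                if e["id"] == 0 and "type" in e:       # ID 0 means "create a new element"
                    el.set("Type", str(e["type"]))
                for attr_id, value in e.get("attrs", {}).items():
                    attr = ET.SubElement(el, "Attr", ID=str(attr_id))
                    if value is None:
                        attr.set("Action", "delete")   # explicit delete, distinct from "never answered"
                    else:
                        attr.text = str(value)
            return ET.tostring(root, encoding="unicode")

        print(build_update_doc([
            {"id": 1234, "attrs": {221: "Value", 225: "287"}},
            {"id": 0, "type": 87, "attrs": {221: "Value", 225: "569"}},
            {"id": 1235, "action": "delete"},
        ]))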

    Read the article

  • Database model for keeping track of likes/shares/comments on blog posts over time

    - by gage
    My goal is to keep track of the popular posts on different blog sites based on social network activity at any given time. The goal is not to simply get the most popular posts right now, but instead to find posts that are popular compared to other posts on the same blog. For example, I follow a tech blog, a sports blog, and a gossip blog. The tech blog gets far more readership than the other two blogs, so in raw numbers every post on the tech blog will always outnumber views on the other two. So let's say the average tech blog post gets 500 Facebook likes and the other two get an average of 50 likes per post. Then, when there is a sports blog post with 200 fb likes and a gossip blog post with 300 while the tech blog posts today have 500 likes, I want to highlight the sports and gossip blog posts (more likes than average, versus the tech blog with a higher number of likes that is merely average for that blog).
    The approach I am thinking of taking is to make an entry in a database for each blog post. Every x minutes (say every 15 minutes) I will check how many likes/shares/comments an entry has received on all the social networks (Facebook, Twitter, Google+, LinkedIn). So over time there will be a history of likes for each blog post, e.g. post 1234:
      after 15 min: 10 fb likes, 4 tweets, 6 g+
      after 30 min: 15 fb likes, 15 tweets, 10 g+
      ...
      after 48 hours: 200 fb likes, 25 tweets, 15 g+
    By keeping a history like this for each blog post I can know the average number of likes/shares/tweets at any given time interval. So if, for example, the average number of fb likes for all blog posts 48 hrs after posting is 50, and a particular post has 200, I can mark that as a popular post and feature/highlight it. A consideration in the design is to be able to easily query the values (likes/shares) for a specific time frame, i.e. fb likes after 30 min or tweets after 24 hrs, in order to compute averages to compare against (or should the averages be stored in their own table?). If this approach is flawed or could use improvement please let me know, but it is not my main question. My main question is: what should a database schema for storing this info look like?
    Assuming that the above approach is taken, I am trying to figure out what a database schema for storing the likes over time would look like. I am brand new to databases; in doing some basic reading I see that it is advisable to make a 3NF database. I have come up with the following possible schema.
    Schema 1
      DB Popular Posts
      Table: Post
        post_id (primary key (pk))
        url
        title
      Table: Social Activity
        activity_id (pk)
        url (fk)
        type (i.e. facebook, twitter, g+)
        value
        timestamp
    This was my initial instinct (based on my very limited db knowledge). As far as I understand, this schema would be 3NF? I searched for designs of a similar database model, and found this question on stackoverflow, http://stackoverflow.com/questions/11216080/data-structure-for-storing-height-and-weight-etc-over-time-for-multiple-users . The scenario in that question is similar (recording weight/height of users over time). Taking the accepted answer for that question and applying it to my model results in something like:
    Schema 2 (same as above, but break the social activity down into 2 tables)
      DB Popular Posts
      Table: Post
        post_id (pk)
        url
        title
      Table: Social Measurement
        measurement_id (pk)
        post_id (fk)
        timestamp
      Table: Social Stat
        stat_id (pk)
        measurement_id (fk)
        type (i.e. facebook, twitter, g+)
        value
    The advantage I see in Schema 2 is that I will likely want to access all the values for a given time, i.e. when making a measurement at 30 min after a post is published I will simultaneously check the number of fb likes, fb shares, fb comments, tweets, g+, LinkedIn. So with this schema it may be easier to get all stats for a measurement_id corresponding to a certain time, i.e. all social stats for post 1234 at time x. Another thought I had is that since it doesn't make sense to compare the number of fb likes with the number of tweets or g+ shares, maybe it makes sense to separate each social measurement into its own table?
    Schema 3
      DB Popular Posts
      Table: Post
        post_id (pk)
        url
        title
      Table: fb_likes
        fb_like_id (pk)
        post_id (fk)
        timestamp
        value
      Table: fb_shares
        fb_shares_id (pk)
        post_id (fk)
        timestamp
        value
      Table: tweets
        tweets_id (pk)
        post_id (fk)
        timestamp
        value
      Table: google_plus
        google_plus_id (pk)
        post_id (fk)
        timestamp
        value
    As you can see I am generally lost/unsure of what approach to take. I'm sure this is a typical type of database problem (storing measurements over time, e.g. temperature statistics) that must have a common solution. Is there a design pattern/model for this, does it have a name? I tried searching for "database periodic data collection" or "database measurements over time" but didn't find anything specific. What would be an appropriate model to solve the needs of this problem?
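    Editor's note: to make the querying consideration concrete, here is a rough sketch of Schema 2 as plain C# classes, with a LINQ query for "all stats at one measurement" and another for "average per network at a given offset". All class, property, and parameter names are assumptions made for this sketch and are not part of the original question.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Rough in-memory sketch of Schema 2; names are illustrative, not prescriptive.
        public class Post { public int PostId; public string Url; public string Title; }
        public class SocialMeasurement { public int MeasurementId; public int PostId; public DateTime Timestamp; }
        public class SocialStat { public int StatId; public int MeasurementId; public string Type; public int Value; }

        public static class PopularityQueries
        {
            // All stats recorded for one post at one measurement time.
            public static IEnumerable<SocialStat> StatsForMeasurement(IEnumerable<SocialStat> stats, int measurementId)
            {
                return stats.Where(s => s.MeasurementId == measurementId);
            }

            // Average value per network type for measurements taken roughly 'offset' after publication.
            public static Dictionary<string, double> AveragesAtOffset(
                IEnumerable<SocialMeasurement> measurements,
                IEnumerable<SocialStat> stats,
                IDictionary<int, DateTime> publishedAt,   // postId -> publish time (assumed to be known)
                TimeSpan offset,
                TimeSpan tolerance)
            {
                var wanted =
                    from m in measurements
                    where publishedAt.ContainsKey(m.PostId)
                          && (m.Timestamp - publishedAt[m.PostId] - offset).Duration() <= tolerance
                    join s in stats on m.MeasurementId equals s.MeasurementId
                    select s;

                return wanted
                    .GroupBy(s => s.Type)
                    .ToDictionary(g => g.Key, g => g.Average(s => (double)s.Value));
            }
        }

    Whether the averages are precomputed into their own table or calculated on the fly as above is exactly the trade-off the question raises; the query shape stays the same either way.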

    Read the article

  • Oracle Open World 2012: SQL Developer Recap

    - by thatjeffsmith
    Last week was the ‘big show’ in San Francisco. I was very happy to meet many of you in person. And many of you had questions – lots of questions! We had full or overflowing rooms for our sessions and hands-on-labs. The SQL Developer ‘booths’ were also slammed several times. So exciting to see so many of YOU excited about SQL Developer. It’s very cool to hear the stories of our tools saving you and your organizations so much time (and money!) Instead of doing a Day 0 – Day 9 recap, I thought I’d share with you the questions that I heard more than once. And just for giggles, I’ll throw in some answers as well So in no particular order… What’s the difference between Oracle SQL Developer & Oracle SQL Developer Data Modeler? Mathematically speaking – two words. But as far as the actual modeling features go, there’s no difference between the two applications. The same ‘code’ or features as it pertains to data modeling and design are in both tools. However, in SQL Developer you have all of the OTHER features fighting for real estate in the UI. So I have a general rule of thumb – if you spend MOST of your time in the database, use SQL Developer. And if you spend most of your time in the data model, run the separate and dedicated program, Oracle SQL Developer Data Modeler. Here’s a couple of screenshots to drive home the UI point: Oracle SQL Developer Oracle SQL Developer Data Modeler running INSIDE of SQL Developer. Notice how the Modeler menu items fold under the file menu? Oracle SQL Developer Data Modeler Easier to navigate and manipulate your models with the stand alone modeler. Just no worksheet to run your ad-hoc queries, etc. Don’t forget you can disable the Data Modeler inside of SQL Developer via the Extensions preference page. How can I model my table partitions? Partitioning is defined via the Physical model. So after you have finished your relational model, you need to generate a physical model. Oracle SQL Developer Data Modeler Physical Model and Partitioning Open the properties for your physical model table. Enable the ‘partitioned’ property. Once you do so, the ‘Partitioning’ page will activate. Lots and lots of partitioning support and options here But what about Interval Partitioning? An extension of range partitioning in 11gR2, we don’t currently support this partitioning scheme in SQL Developer. But we’re working on it! Can SQL Developer ignore column order when comparing models? Yes! After you start a model compare, one of your options is to disregard the order of an attribute or column definition. Tell SQL Developer you don’t care when your column shows up, just as long as it DOES show up. Wow, you got a lot of questions around modeling! Is that normal? Yes! While we appreciate that many folks inherit their applications and associated designs, new applications are being ‘born’ every day. Since both of our tools are free for anyone to design their new Oracle applications with, we attract a fair amount of attention I want to do a Hands On Lab. How do I get your software and instructional guides? Go here. Download VirtualBox. Then download the VB image. Import the appliance. Start it. Connect oracle/oracle on the OEL VM. Click on ‘Start Here’ in the desktop. Follow the instructions. If you need help, ask away! You went too fast in your Tips & Tricks session. Do you have cliff notes? Yes! And you’re SO close to finding them! Just go to my SQL Developer resources page. All of my tips are documented on this blog somewhere. I’ve indexed the most popular ones on the resource page. 
You can use the Search dialog on the right to find the rest. Or just send me a comment or question, and I’ll do my best to answer them as they come in.

    Read the article

  • Developer Training – Employee Morals and Ethics – Part 2

    - by pinaldave
    Developer Training - Importance and Significance - Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective - Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary- Part 5 If you have been reading this series of posts about Developer Training, you can probably determine where my mind lies in the matter – firmly “pro.”  There are many reasons to think that training is an excellent idea for the company.  In the end, it may seem like the company gets all the benefits and the employee has just wasted a few hours in a dark, stuffy room.  However, don’t let yourself be fooled, this is not the case! Training, Company and YOU! Do not forget, that as an employee, you are your company’s best asset.  Training is meant to benefit the company, of course, but in the end, YOU, the employee, is the one who walks away with a lot of useful knowledge in your head.  This post will discuss what to do with that knowledge, how to acquire it, and who should pay for it. Eternal Question – Who Pays for Training? When the subject of training comes up, money is often the sticky issue.  Some companies will argue that because the employee is the one who benefits the most, he or she should pay for it.  Of course, whenever money is discuss, emotions tend to follow along, and being told you have to pay money for mandatory training often results in very unhappy employees – the opposite result of what the training was supposed to accomplish.  Therefore, many companies will pay for the training.  However, if your company is reluctant to pay for necessary training, or is hesitant to pay for a specific course that is extremely expensive, there is always the art of compromise.  The employee and the company can split the cost of the training – after all, both the company and the employee will be benefiting. [Click on following image to answer important question] Click to Enlarge  This kind of “hybrid” pay scheme can be split any way that is mutually beneficial.  There is the obvious 50/50 split, but for extremely expensive classes or conferences, this still might be prohibitively expensive for the employee.  If you are facing this situation, here are some example solutions you could suggest to your employer:  travel reimbursement, paid leave, payment for only the tuition.  There are even more complex solutions – the company could pay back the employee after the training and project has been completed. Training is not Vacation Once the classes have been settled on, and the question of payment has been answered, it is time to attend your class or travel to your conference!  The first rule is one that your mothers probably instilled in you as well – have a good attitude.  While you might be looking forward to your time off work, going to an interesting class, hopefully with some friends and coworkers, but do not mistake this time as a vacation.  It can be tempting to only have fun, but don’t forget to learn as well.  I call this “attending sincerely.”  Pay attention, have an open mind and good attitude, and don’t forget to take notes!  You might be surprised how many people will want to see what you learned when you go back. Report Back the Learning When you get back to work, those notes will come in handy.  Your supervisor and coworkers might want you to give a short presentation about what you learned.  Attending these classes can make you almost a celebrity.  
Don’t be too nervous about these presentations, and don’t feel like they are meant to be a test of your dedication.  Many people will be genuinely curious – and maybe a little jealous that you go to go learn something new.  Be generous with your notes and be willing to pass your learning on to others through mini-training sessions of your own. [Click on following image to answer important question] Click to Enlarge Practice New Learning On top of helping to train others, don’t forget to put your new knowledge to use!  Your notes will come in handy for this, and you can even include your plans for the future in your presentation when you return.  This is a good way to demonstrate to your bosses that the money they paid (hopefully they paid!) is going to be put to good use. Feedback to Manager When you return, be sure to set aside a few minutes to talk about your training with your manager.  Be perfectly honest – your manager wants to know the good and the bad.  If you had a truly miserable time, do not lie and say it was the best experience – you and others may be forced to attend the same training over and over again!  Of course, you do not want to sound like a complainer, so make sure that your summary includes the good news as well.  Your manager may be able to help you understand more of what they wanted you to learn, too. Win-Win Situation In the end, remember that training is supposed to be a benefit to the employer as well as the employee.  Make sure that you share your information and that you give feedback about how you felt the sessions went as well as how you think this training can be implemented at the company immediately. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Play Your Favorite DOS Games in XP, Vista, and Windows 7

    - by Matthew Guay
    Want to take a trip down memory lane with old school DOS games?  D-Fend Reloaded makes it easy for you to play your favorite DOS games directly on XP, Vista, and Windows 7. D-Fend Reloaded is a great frontend for DOSBox, the popular DOS emulator.  It lets you install and run many DOS games and applications directly from its interface without ever touching a DOS prompt.  It works great on XP, Vista, and Windows 7 32 & 64-bit versions.   Getting Started Download D-Fend Reloaded (link below), and install with the default settings.  You don’t need to install DOSBox, as D-Fend Reloaded will automatically install all the components you need to run DOS games on Windows. D-Fend Reloaded can also be installed as a portable application, so you can run it from a flash drive on any Windows computer by selecting User defined installation. Then select Portable mode installation. Once D-Fend Reloaded is installed, you can go ahead and open the program. Then simply click “Accept all settings” to apply the default settings.   D-Fend is now ready to run all of your favorite DOS games. Installing DOS Games and Applications: To install a DOS game or application, simply drag-and-drop a zip file of the app into D-Fend Reloaded’s window.  D-Fend Reloaded will automatically extract the program… Then will ask you to name the application and choose where to store it — by default it uses the name of the DOS app. Now you’ll see a new entry for the app you just installed.  Simply double-click to run it.   D-Fend will remind you that you can switch out of fullscreen mode by pressing Alt+Enter, and can also close the DOS application by pressing Ctrl+F9.  Press Ok to run the program. Here we’re running Ms. PacPC, a remake of the classic game Ms. Pac-Man, in full-screen mode.  All features work automatically, including sound, and you never have to setup anything from DOS command line — it just works. Here it’s in windowed mode running on Windows 7. Please note that your color scheme may change to Windows Basic while running DOS applications. You can run DOS application just as easily.  Here’s Word 5.5 running in in DOSBox through D-Fend Reloaded… Game Packs: Want to quickly install many old DOS freeware and trial games?  D-Fend Reloaded offers several game packs that let you install dozens of DOS games with only four clicks…just download and run the game pack installer of your choice (link below). Now you’ve got a selection of DOS games to choose from. Here’s a group of poor lemmings walking around … in Windows 7. Conclusion D-Fend Reloaded gives you a great way to run your favorite DOS games and applications directly from XP, Vista, and Windows 7.  Give it a try, and relive your DOS days from the comfort of your Windows desktop. What were some of your favorite DOS games and applications? Leave a comment and let us know. Links Download D-Fend Reloaded Download DOS game packs for D-Fend Reloaded Download Ms. 
Pac-PC

    Read the article

  • Partition Wise Joins

    - by jean-pierre.dijcks
    Some say they are the holy grail of parallel computing: PWJ is the basis for a shared nothing system and the only join method that is available on a shared nothing system (yes, this is oversimplified!). The magic in Oracle is of course that it is one of many ways to join data. And yes, this is the old flexibility vs. simplicity discussion all over, so I won't go there... the point is that what you must do in a shared nothing system, you can do in Oracle with the same speed and methods. The Theory A partition wise join is a join between (for simplicity) two tables that are partitioned on the same column with the same partitioning scheme. In shared nothing this is effectively hard partitioning, locating data on a specific node / storage combo. In Oracle it is logical partitioning. If you now join the two tables on that partitioned column you can break up the join into smaller joins exactly along the partitions in the data. Since they are partitioned (grouped) into the same buckets, all values required to do the join live in the equivalent bucket on either side. No need to talk to anyone else, no need to redistribute data to anyone else... in short, the optimal join method for parallel processing of two large data sets. PWJs in Oracle Since we do not hard partition the data across nodes in Oracle, we use the Partitioning option of the database to create the buckets, then set the Degree of Parallelism (or run Auto DOP - see here) and get our PWJs. The main questions always asked are: How many partitions should I create? What should my DOP be? In a shared nothing system the answer is of course as many partitions as there are nodes, which will be your DOP. In Oracle we do want you to look at the workload and concurrency, and once you know that, to apply the following rules of thumb. Within Oracle we have more ways of joining data, so it is important to understand some of the PWJ ideas and what it means if you have an uneven distribution across processes. Assume we have a simple scenario where we partition the data on a hash key, resulting in 4 hash partitions (H1 - H4). We have 2 parallel processes that have been tasked with reading these partitions (P1 - P2). The work is evenly divided assuming the partitions are the same size, and we can scan this in time t1. Now assume that we have changed the system and have a 5th partition but still have our 2 workers P1 and P2. The time it takes is actually 50% more, assuming the 5th partition has the same size as the original H1 - H4 partitions. In other words, to scan these 5 partitions the time t2 it takes is not 1/5th more expensive; it is a lot more expensive, and some other join plans may now start to look exciting to the optimizer. Just to post the disclaimer, it is not as simple as I state it here, but you get the idea of how much more expensive this plan may now look... Based on this little example there are a few rules of thumb to follow to get partition wise joins. First, choose a DOP that is a power of two (2). So always choose something like 2, 4, 8, 16, 32 and so on... Second, choose a number of partitions that is larger than or equal to 2 * DOP. Third, make sure the number of partitions is divisible by 2 without orphans. This is also known as an even number... Fourth, choose a stable partition count strategy, which is typically hash, which can be a subpartitioning strategy rather than the main strategy (range-hash is a popular one).
Fifth, make sure you do this on the join key between the two large tables you want to join (and this should be the obvious one...). Translating this into an example: DOP = 8 (determined based on concurrency or by using Auto DOP with a cap due to concurrency) says that the number of partitions >= 16. Number of hash (sub) partitions = 32, which gives each process four partitions to work on. This number is somewhat arbitrary and depends on your data and system. In this case my main reasoning is that if you get more room on the box you can easily move the DOP for the query to 16 without repartitioning... and of course it makes for no leftovers on the table... And yes, we recommend up-to-date statistics. And before you start complaining, do read this post on a cool way to do stats in 11.
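    Editor's note: the 4-versus-5-partition example above is really just integer arithmetic — with equally sized partitions, each parallel process works through ceil(partitions / DOP) "waves", so 4 partitions across 2 processes take 2 waves while 5 take 3, the 50% penalty described. A small C# sketch of that, together with the rules of thumb above; the method names and exact checks are this editor's shorthand, not anything Oracle ships.

        using System;

        static class PwjPlanner
        {
            // Number of "waves" of equally sized partitions each parallel process must scan.
            static int ScanRounds(int partitionCount, int dop)
            {
                return (partitionCount + dop - 1) / dop;   // integer ceiling
            }

            // Rough check of the rules of thumb from the post: DOP is a power of two,
            // partition count is even and at least 2 * DOP.
            static bool LooksLikeGoodPwjSetup(int partitionCount, int dop)
            {
                bool dopPowerOfTwo = dop > 0 && (dop & (dop - 1)) == 0;
                return dopPowerOfTwo
                    && partitionCount % 2 == 0
                    && partitionCount >= 2 * dop;
            }

            static void Main()
            {
                Console.WriteLine(ScanRounds(4, 2));               // 2 waves  -> time t1
                Console.WriteLine(ScanRounds(5, 2));               // 3 waves  -> roughly 50% longer
                Console.WriteLine(LooksLikeGoodPwjSetup(32, 8));   // True: the DOP = 8 example in the post
                Console.WriteLine(LooksLikeGoodPwjSetup(5, 2));    // False: odd partition count
            }
        }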

    Read the article

  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who has followed my blog for many years, you know that I’ve been going at POP Forums now for over almost 15 years. Publishing it as an open source app has been a big help because it helps me understand how people want to use it, and having it translated to six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years. One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world, is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that. The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long running page to mail users in real-time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did is launched a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working. So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.) It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on the application logging and FTP in to see what was going on. That led me to a helper method I was using as delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing: public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null) { var helper = new UrlHelper(controller.Request.RequestContext); var requestUrl = controller.Request.Url; if (requestUrl == null) return String.Empty; var url = requestUrl.Scheme + "://"; url += requestUrl.Host; url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : ""); url += helper.Action(actionName, controllerName, routeValues); return url; } And yes, that should have been done with a string builder. 
This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:
        Func<User, string> unsubscribeLinkGenerator =
            user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
Cool, right? Actually, not so much. If you look back at the helper, this delegate will then depend on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL encoded:
        var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
        baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
        Func<User, string> unsubscribeLinkGenerator =
            user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);
And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.
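    Editor's note: for reference, the "static timers hiding in an HttpModule" trick mentioned at the top looks roughly like the sketch below — a private static field keeps the timer rooted across requests, which is the same fix applied to the mailer thread. This is a reconstruction of the general pattern, not the actual POP Forums code, and it inherits all the caveats the post mentions (app pool recycles, multiple instances).

        using System;
        using System.Threading;
        using System.Web;

        // Minimal sketch: a background timer that survives individual requests because it is
        // rooted in a static field rather than a local variable.
        public class BackgroundWorkModule : IHttpModule
        {
            private static Timer _timer;
            private static readonly object _sync = new object();

            public void Init(HttpApplication context)
            {
                lock (_sync)
                {
                    if (_timer == null)
                    {
                        // Run DoWork immediately and then every five minutes.
                        _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromMinutes(5));
                    }
                }
            }

            private static void DoWork(object state)
            {
                try
                {
                    // ... queue emails, clean up sessions, etc. ...
                }
                catch (Exception)
                {
                    // Swallow exceptions so the timer callback never takes down the worker process.
                }
            }

            public void Dispose()
            {
                // Intentionally empty: the timer is static and shared across module instances.
            }
        }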

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with Open Solaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11, when it became available. I am not a big fan of upgrades as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance. For a number of reasons I prefer a dual boot system. The most important one is, that especially with mobile devices you often run into network problems. And you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant - and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old Open Suse installation that had not been booted for a while. I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris  into the MBR and onto the root partition. But then, how to enable dual boot with the 2 OSes. Searching the web one mainly finds information about dual boot of: Linux and Linux Linux and Windows I do not want to explain which wrong configurations I worked through, but I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OS'es, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still there were some hurdles to jump over: Ubuntu did not like getting its boot blocks being placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you could get that done Ubuntu needs an active partition; that was easy to achieve grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them. BTW: The usual disclaimer is valid. There is  no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this. So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubtuntu things initially were a bit more complicated, as I did not know how to boot it. And the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended, at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3 So, I now had a system with the Solaris boot loader in the MBR, Solaris specific boot blocks on the Solaris root partition and Ubuntu specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done. 
Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!!!!):
        title Ubuntu 11.10
        root (hd0,2)
        makeactive
        chainloader +1
        boot
The Ubuntu root file system sits on the third partition (/dev/sda3). Ubuntu: I have added the following lines to /etc/grub.d/40_custom:
        menuentry "Solaris 11/11" {
            set root=(hd0,1)
            chainloader +1
        }
Two things need to be mentioned: a) grub2 starts numbering partitions with 1, so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut

    Read the article

  • Visual Studio Little Wonders: Box Selection

    - by James Michael Hare
    So this week I decided I’d do a Little Wonder of a different kind and focus on an underused IDE improvement: Visual Studio’s Box Selection capability. This is a handy feature that many people still don’t realize was made available in Visual Studio 2010 (and beyond).  True, there have been other editors in the past with this capability, but now that it’s fully part of Visual Studio we can enjoy it’s goodness from within our own IDE. So, for those of you who don’t know what box selection is and what it allows you to do, read on! Sometimes, we want to select beyond the horizontal… The problem with traditional text selection in many editors is that it is horizontally oriented.  Sure, you can select multiple rows, but if you do you will pull in the entire row (at least for the middle rows).  Under the old selection scheme, if you wanted to select a portion of text from each row (a “box” of text) you were out of luck.  Box selection rectifies this by allowing you to select a box of text that bounded by a selection rectangle that you can grow horizontally or vertically.  So let’s think a situation that could occur where this comes in handy. Let’s say, for instance, that we are defining an enum in our code that we want to be able to translate into some string values (possibly to be stored in a database, output to screen, etc.). Perhaps such an enum would look like this: 1: public enum OrderType 2: { 3: Buy, // buy shares of a commodity 4: Sell, // sell shares of a commodity 5: Exchange, // exchange one commodity for another 6: Cancel, // cancel an order for a commodity 7: } 8:  Now, let’s say we are in the process of creating a Dictionary<K,V> to translate our OrderType: 1: var translator = new Dictionary<OrderType, string> 2: { 3: // do I really want to retype all this??? 4: }; Yes the example above is contrived so that we will pull some garbage if we do a multi-line select. I could select the lines above using the traditional multi-line selection: And then paste them into the translator code, which would result in this: 1: var translator = new Dictionary<OrderType, string> 2: { 3: Buy, // buy shares of a commodity 4: Sell, // sell shares of a commodity 5: Exchange, // exchange one commodity for another 6: Cancel, // cancel an order for a commodity 7: }; But I have a lot of junk there, sure I can manually clear it out, or use some search and replace magic, but if this were hundreds of lines instead of just a few that would quickly become cumbersome. The Box Selection Now that we have the ability to create box selections, we can select the box of text to delete!  Most of us are familiar with the fact we can drag the mouse (or hold [Shift] and use the arrow keys) to create a selection that can span multiple rows: Box selection, however, actually allows us to select a box instead of the typical horizontal lines: Then we can press the [delete] key and the pesky comments are all gone! You can do this either by holding down [Alt] while you select with your mouse, or by holding down [Alt+Shift] and using the arrow keys on the keyboard to grow the box horizontally or vertically. So now we have: 1: var translator = new Dictionary<OrderType, string> 2: { 3: Buy, 4: Sell, 5: Exchange, 6: Cancel, 7: }; Which is closer, but we still need an opening curly, the string to translate to, and the closing curly and comma. Fortunately, again, this is easy with box selections due to the fact box selection can even work for a zero-width selection! 
That is, hold down [Alt] and either drag down with no width, or hold down [Alt+Shift] and arrow down and you will define a selection range with no width, essentially, a vertical line selection: Notice the faint selection line on the right? So why is this useful? Well, just like with any selected range, we can type and it will replace the selection. What does this mean for box selections? It means that we can insert the same text all the way down on each line! If we have the same selection above, and type a curly and a space, we’d get: Imagine doing this over hundreds of lines and think of what a time saver it could be! Now make a zero-width selection on the other side: And type a curly and a comma, and we’d get: So close! Now finally, imagine we’ve already defined these strings somewhere and want to paste them in: 1: const private string BuyText = "Buy Shares"; 2: const private string SellText = "Sell Shares"; 3: const private string ExchangeText = "Exchange"; 4: const private string CancelText = "Cancel"; We can, again, use our box selection to pull out the constant names: And clicking copy (or [CTRL+C]) and then selecting a range to paste into: And finally clicking paste (or [CTRL+V]) to get the final result: 1: var translator = new Dictionary<OrderType, string> 2: { 3: { Buy, BuyText }, 4: { Sell, SellText }, 5: { Exchange, ExchangeText }, 6: { Cancel, CancelText }, 7: };   Sure, this was a contrived example, but I’m sure you’ll agree that it adds myriad possibilities of new ways to copy and paste vertical selections, as well as inserting text across a vertical slice. Summary: While box selection has been around in other editors, we finally get to experience it in VS2010 and beyond. It is extremely handy for selecting columns of information for cutting, copying, and pasting. In addition, it allows you to create a zero-width vertical insertion point that can be used to enter the same text across multiple rows. Imagine the time you can save adding repetitive code across multiple lines!  Try it, the more you use it, the more you’ll love it! Technorati Tags: C#,CSharp,.NET,Visual Studio,Little Wonders,Box Selection
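    Editor's note: to round out the example built up above, the pieces do assemble into ordinary, compilable C#; a quick sketch of the finished enum, constants, and translator dictionary in use (only the Main method and the fully qualified enum keys are additions for this sketch):

        using System;
        using System.Collections.Generic;

        public enum OrderType { Buy, Sell, Exchange, Cancel }

        public static class OrderTypeTranslation
        {
            private const string BuyText = "Buy Shares";
            private const string SellText = "Sell Shares";
            private const string ExchangeText = "Exchange";
            private const string CancelText = "Cancel";

            private static readonly Dictionary<OrderType, string> translator =
                new Dictionary<OrderType, string>
                {
                    { OrderType.Buy, BuyText },
                    { OrderType.Sell, SellText },
                    { OrderType.Exchange, ExchangeText },
                    { OrderType.Cancel, CancelText },
                };

            public static void Main()
            {
                Console.WriteLine(translator[OrderType.Buy]);   // prints "Buy Shares"
            }
        }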

    Read the article

  • Using Hadooop (HDInsight) with Microsoft - Two (OK, Three) Options

    - by BuckWoody
    Microsoft has many tools for “Big Data”. In fact, you need many tools – there’s no product called “Big Data Solution” in a shrink-wrapped box – if you find one, you probably shouldn’t buy it. It’s tempting to want a single tool that handles everything in a problem domain, but with large, complex data, that isn’t a reality. You’ll mix and match several systems, open and closed source, to solve a given problem. But there are tools that help with handling data at large, complex scales. Normally the best way to do this is to break up the data into parts, and then put the calculation engines for that chunk of data right on the node where the data is stored. These systems are in a family called “Distributed File and Compute”. Microsoft has a couple of these, including the High Performance Computing edition of Windows Server. Recently we partnered with Hortonworks to bring the Apache Foundation’s release of Hadoop to Windows. And as it turns out, there are actually two (technically three) ways you can use it. (There’s a more detailed set of information here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx, I’ll cover the options at a general level below)  First Option: Windows Azure HDInsight Service  Your first option is that you can simply log on to a Hadoop control node and begin to run Pig or Hive statements against data that you have stored in Windows Azure. There’s nothing to set up (although you can configure things where needed), and you can send the commands, get the output of the job(s), and stop using the service when you are done – and repeat the process later if you wish. (There are also connectors to run jobs from Microsoft Excel, but that’s another post)   This option is useful when you have a periodic burst of work for a Hadoop workload, or the data collection has been happening into Windows Azure storage anyway. That might be from a web application, the logs from a web application, telemetrics (remote sensor input), and other modes of constant collection.   You can read more about this option here:  http://blogs.msdn.com/b/windowsazure/archive/2012/10/24/getting-started-with-windows-azure-hdinsight-service.aspx Second Option: Microsoft HDInsight Server Your second option is to use the Hadoop Distribution for on-premises Windows called Microsoft HDInsight Server. You set up the Name Node(s), Job Tracker(s), and Data Node(s), among other components, and you have control over the entire ecostructure.   This option is useful if you want to  have complete control over the system, leave it running all the time, or you have a huge quantity of data that you have to bulk-load constantly – something that isn’t going to be practical with a network transfer or disk-mailing scheme. You can read more about this option here: http://www.microsoft.com/sqlserver/en/us/solutions-technologies/business-intelligence/big-data.aspx Third Option (unsupported): Installation on Windows Azure Virtual Machines  Although unsupported, you could simply use a Windows Azure Virtual Machine (we support both Windows and Linux servers) and install Hadoop yourself – it’s open-source, so there’s nothing preventing you from doing that.   Aside from being unsupported, there are other issues you’ll run into with this approach – primarily involving performance and the amount of configuration you’ll need to do to access the data nodes properly. 
But for a single-node installation (where all components run on one system) such as learning, demos, training and the like, this isn’t a bad option. Did I mention that’s unsupported? :) You can learn more about Windows Azure Virtual Machines here: http://www.windowsazure.com/en-us/home/scenarios/virtual-machines/ And more about Hadoop and the installation/configuration (on Linux) here: http://en.wikipedia.org/wiki/Apache_Hadoop And more about the HDInsight installation here: http://www.microsoft.com/web/gallery/install.aspx?appid=HDINSIGHT-PREVIEW Choosing the right option Since you have two or three routes you can go, the best thing to do is evaluate the need you have, and place the workload where it makes the most sense.  My suggestion is to install the HDInsight Server locally on a test system, and play around with it. Read up on the best ways to use Hadoop for a given workload, understand the parts, write a little Pig and Hive, and get your feet wet. Then sign up for a test account on HDInsight Service, and see how that leverages what you know. If you're a true tinkerer, go ahead and try the VM route as well. Oh - there’s another great reference on the Windows Azure HDInsight that just came out, here: http://blogs.msdn.com/b/brunoterkaly/archive/2012/11/16/hadoop-on-azure-introduction.aspx  

    Read the article

  • WIF, ADFS 2 and WCF&ndash;Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
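    Editor's note: one concrete payoff of the manual approach is the token caching the post alludes to — because you hold the SecurityToken yourself, you can reuse it until it expires instead of paying a WS-Trust round trip per channel. A minimal sketch of such a cache around the GetToken() helper above; the locking and the five-minute expiry margin are this editor's choices, not part of WIF.

        using System;
        using System.IdentityModel.Tokens;

        public static class TokenCache
        {
            private static SecurityToken _cachedToken;
            private static readonly object _sync = new object();

            // Returns the cached token while it is still valid (with a small safety margin),
            // otherwise requests a fresh one via the supplied callback (e.g. the GetToken() helper above).
            public static SecurityToken GetOrRequestToken(Func<SecurityToken> requestToken)
            {
                lock (_sync)
                {
                    if (_cachedToken == null ||
                        _cachedToken.ValidTo <= DateTime.UtcNow.AddMinutes(5))
                    {
                        _cachedToken = requestToken();
                    }
                    return _cachedToken;
                }
            }
        }

        // Usage, combining the two helpers from the post:
        //   var token = TokenCache.GetOrRequestToken(GetToken);
        //   CallService(token);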

    Read the article

  • Computer Networks UNISA - Chap 15 &ndash; Network Management

    - by MarkPearl
    After reading this section you should be able to: understand network management and the importance of documentation, baseline measurements, policies, and regulations to assess and maintain a network’s health; manage a network’s performance using SNMP-based network management software, system and event logs, and traffic-shaping techniques; identify the reasons for and elements of an asset management system; and plan and follow regular hardware and software maintenance routines. Fundamentals of Network Management Network management refers to the assessment, monitoring, and maintenance of all aspects of a network, including checking for hardware faults, ensuring high QoS, maintaining records of network assets, etc. The scope of network management differs depending on the size and requirements of the network. All subtopics of network management share the goals of enhancing efficiency and performance while preventing costly downtime or loss. Documentation The way documentation is stored may vary, but to adequately manage a network one should at least record the following: physical topology (types of LAN and WAN topologies – ring, star, hybrid); access method (does it use Ethernet 802.3, token ring, etc.); protocols; devices (switches, routers, etc.); operating systems; applications; configurations (what version of operating system, and config files for server/client software). Baseline Measurements A baseline is a report of the network’s current state of operation. Baseline measurements might include the utilization rate for your network backbone, number of users logged on per day, etc. Baseline measurements allow you to compare future performance increases or decreases caused by network changes or events with past network performance. Obtaining baseline measurements is the only way to know for certain whether a pattern of usage has changed, or whether a network upgrade has made a difference. There are various tools available for measuring baseline performance on a network. Policies, Procedures, and Regulations Following rules helps limit chaos, confusion, and possibly downtime. The following policies, procedures, and regulations make for sound network management: media installations and management (includes designing the physical layout of cable, etc.); network addressing policies (includes choosing and applying an addressing scheme); resource sharing and naming conventions (includes rules for logon IDs); security-related policies; troubleshooting procedures; backup and disaster recovery procedures. In addition to internal policies, a network manager must consider external regulatory rules. Fault and Performance Management After documenting every aspect of your network and following policies and best practices, you are ready to assess your network’s status on an ongoing basis. This process includes both performance management and fault management. Network Management Software To accomplish both fault and performance management, organizations often use enterprise-wide network management software. There are various software packages that do this; each collects data from multiple networked devices at regular intervals, in a process called polling. Each managed device runs a network management agent. So as not to affect the performance of a device while collecting information, agents do not demand significant processing resources. The definitions of managed devices and their data are collected in a MIB (Management Information Base).
Agents communicate information about managed devices via any of several application layer protocols. On modern networks most agents use SNMP which is part of the TCP/IP suite and typically runs over UDP on port 161. Because of the flexibility and sophisticated network management applications are a challenge to configure and fine-tune. One needs to be careful to only collect relevant information and not cause performance issues (i.e. pinging a device every 5 seconds can be a problem with thousands of devices). MRTG (Multi Router Traffic Grapher) is a simple command line utility that uses SNMP to poll devices and collects data in a log file. MRTG can be used with Windows, UNIX and Linux. System and Event Logs Virtually every condition recognized by an operating system can be recorded. This is typically done using event logs. In Windows there is a GUI event log viewer. Similar information is recorded in UNIX and Linux in a system log. Much of the information collected in event logs and syslog files does not point to a problem, even if it is marked with a warning so it is important to filter your logs appropriately to reduce the noise. Traffic Shaping When a network must handle high volumes of network traffic, users benefit from performance management technique called traffic shaping. Traffic shaping involves manipulating certain characteristics of packets, data streams, or connections to manage the type and amount of traffic traversing a network or interface at any moment. Its goals are to assure timely delivery of the most important traffic while offering the best possible performance for all users. Several types of traffic prioritization exist including prioritizing traffic according to any of the following characteristics… Protocol IP address User group DiffServr VLAN tag in a Data Link layer frame Service or application Caching In addition to traffic shaping, a network or host might use caching to improve performance. Caching is the local storage of frequently needed files that would otherwise be obtained from an external source. By keeping files close to the requester, caching allows the user to access those files quickly. The most common type of caching is Web caching, in which Web pages are stored locally. To an ISP, caching is much more than just convenience. It prevents a significant volume of WAN traffic, thus improving performance and saving money. Asset Management Another key component in managing networks is identifying and tracking its hardware. This is called asset management. The first step to asset management is to take an inventory of each node on the network. You will also want to keep records of every piece of software purchased by your organization. Asset management simplifies maintaining and upgrading the network chiefly because you know what the system includes. In addition, asset management provides network administrators with information about the costs and benefits of certain types of hardware or software. Change Management Networks are always in a stage of flux with various aspects including… Software changes and patches Client Upgrades Shared Application Upgrades NOS Upgrades Hardware and Physical Plant Changes Cabling Upgrades Backbone Upgrades For a detailed explanation on each of these read the textbook (Page 750 – 761)
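    Editor's note: as a small illustration of the "filter your logs to reduce the noise" advice above, the sketch below reads the Windows System event log and keeps only warnings and errors from the last day, using the standard System.Diagnostics.EventLog API. The log name, time window, and output format are arbitrary choices for the example.

        using System;
        using System.Diagnostics;

        class EventLogFilterDemo
        {
            static void Main()
            {
                var cutoff = DateTime.Now.AddDays(-1);

                using (var log = new EventLog("System"))
                {
                    foreach (EventLogEntry entry in log.Entries)
                    {
                        // Skip informational noise; keep only recent warnings and errors.
                        bool interesting = entry.EntryType == EventLogEntryType.Warning
                                        || entry.EntryType == EventLogEntryType.Error;

                        if (interesting && entry.TimeGenerated >= cutoff)
                        {
                            Console.WriteLine("{0}  {1,-8}  {2}: {3}",
                                entry.TimeGenerated,
                                entry.EntryType,
                                entry.Source,
                                entry.Message);
                        }
                    }
                }
            }
        }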

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >