Search Results

Search found 3617 results on 145 pages for 'physical wear'.


  • SQL SERVER – Guest Posts – Feodor Georgiev – The Context of Our Database Environment – Going Beyond the Internal SQL Server Waits – Wait Type – Day 21 of 28

    - by pinaldave
    This guest post is submitted by Feodor. Feodor Georgiev is a SQL Server database specialist with extensive experience of thinking both within and outside the box. He has wide experience of different systems and solutions in the fields of architecture, scalability, performance, etc. Feodor has experience with SQL Server 2000 and later versions, and is certified in SQL Server 2008. In this article Feodor explains the server-client-server process, and concentrates on the mutual waits between client and SQL Server. This is essential in grasping the concept of waits in a ‘global’ application plan. Recently I was asked to write a blog post about the wait statistics in SQL Server and since I had been thinking about writing it for quite some time now, here it is. It is a widespread idea that the wait statistics in SQL Server will tell you everything about your performance. Well, almost. Or should I say – barely. The reason for this is that SQL Server is always a part of a bigger system – there are always other players in the game: whether it is a client application, web service, any other kind of data import/export process and so on. In short, the SQL Server surroundings look like this: This means that SQL Server, aside from its internal waits, also depends on external waits and settings. As we can see in the picture above, SQL Server needs to have an interface in order to communicate with the surrounding clients over the network. For this communication, SQL Server uses protocol interfaces. I will not go into detail about which protocols are best, but you can read this article. Also, review the information about the TDS (Tabular Data Stream). As we all know, our system is only as fast as its slowest component. This means that when we look at our environment as a whole, the SQL Server might be a victim of external pressure, no matter how well we have tuned our database server performance. Let’s dive into an example: let’s say that we have a web server, hosting a web application which is using data from our SQL Server, hosted on another server. The network card of the web server for some reason is malfunctioning (think of a hardware failure, driver failure, or just improper setup) and does not send/receive data faster than 10 Mbps. On the other end, our SQL Server will not be able to send/receive data at a faster rate either. This means that the application users will notify the support team and will say: “My data is coming very slow.” Now, let’s move on to a bit more exciting example: imagine that there is a similar setup to the example above – one web server and one database server, and the application is not using any stored procedure calls, but instead for every user request the application is sending an 80 KB query over the network to the SQL Server. (I really thought this does not happen in real life until I saw it one day.) So, what happens in this case? To make things worse, let’s say that the 80 KB query text is submitted from the application to the SQL Server at least 100 times per minute, and as often as 300 times per minute in peak times. Here is what happens: in order for this query to reach the SQL Server, it will have to be broken into a number of network packets (according to the packet size settings) – and will travel over the network.
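The packet size mentioned above is a server-level setting that clients can override per connection. As a minimal, hedged sketch (not part of Feodor's original post), this is how you could check the configured value:

    -- Sketch: the server-side packet size that governs how a large query text
    -- is split into TDS packets (the default is 4096 bytes).
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'network packet size (B)';
    -- Clients can override this per connection (the "Packet Size" connection string setting).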
On the other side, our SQL Server network card will receive the packets, will pass them to our network layer, the packets will get assembled, and eventually SQL Server will start processing the query – parsing, algebrizing, generating the query execution plan and so on. So far, we have already had a serious network overhead by waiting for the packets to reach our Database Engine. There will certainly be some processing overhead – until the database engine deals with the 80 KB query and its 20 subqueries. The waits you see in the DMVs are actually collected from the point the query reaches the SQL Server and the packets are assembled. Let’s say that our query is processed and it finally returns 15000 rows. These rows have a certain size as well, depending on the data types returned. This means that the data will have to be converted to packets (depending on the network packet size settings) and will have to reach the application server. There will also be waits; however, this time you will be able to see a wait type in the DMVs called ASYNC_NETWORK_IO. What this wait type indicates is that the client is not consuming the data fast enough and the network buffers are filling up. Recently Pinal Dave posted a blog on Client Statistics. What Client Statistics does is capture the physical flow characteristics of the query between the client (Management Studio, in this case) and the server and back to the client. As you see in the image, there are three categories: Query Profile Statistics, Network Statistics and Time Statistics. Number of server roundtrips – a roundtrip consists of a request sent to the server and a reply from the server to the client. For example, if your query has three select statements, and they are separated by the ‘GO’ command, then there will be three different roundtrips. TDS Packets sent from the client – TDS (Tabular Data Stream) is the language which SQL Server speaks, and in order for applications to communicate with SQL Server, they need to pack the requests in TDS packets. TDS Packets sent from the client is the number of packets sent from the client; in case the request is large, then it may need more buffers, and eventually might even need more server roundtrips. TDS packets received from server – is the number of TDS packets sent by the server to the client during the query execution. Bytes sent from client – is the volume of data sent to our SQL Server, measured in bytes; i.e. how big of a query we have sent to the SQL Server. This is why it is best to use stored procedures, since the reusable code (which already exists as an object in the SQL Server) will only be called as a name of procedure + parameters, and this will minimize the network pressure. Bytes received from server – is the amount of data the SQL Server has sent to the client, measured in bytes. Depending on the number of rows and the datatypes involved, this number will vary. But still, think about the network load when you request data from SQL Server. Client processing time – is the amount of time spent in milliseconds between the first received response packet and the last received response packet by the client. Wait time on server replies – is the time in milliseconds between the last request packet which left the client and the first response packet which came back from the server to the client.
Total execution time – is the sum of client processing time and wait time on server replies (the SQL Server internal processing time). Here is an illustration of the client-server communication model which should help you understand the mutual waits in a client-server environment. Keep in mind that a query with a large ‘wait time on server replies’ means the server took a long time to produce the very first row. This is usual for queries that have operators which need the entire sub-query to be evaluated before they proceed (for example, sort and top operators). However, a query with a very short ‘wait time on server replies’ means that the query was able to return the first row fast. However, a long ‘client processing time’ does not necessarily imply the client spent a lot of time processing and the server was blocked waiting on the client. It can simply mean that the server continued to return rows from the result and this is how long it took until the very last row was returned. The bottom line is that developers and DBAs should work together and think carefully about resource utilization in the client-server environment. From experience I can say that so far I have seen only cases where the application developers and the database developers are on their own and do not ask questions about the other party’s world. I would recommend using the Client Statistics tool during new development to track the performance of the queries, and also to find a synchronous way of utilizing resources between the client – server – client. Here is another example: think about a similar setup to the one above, but add another server to the game. Let’s say that we keep our media on a separate server, and together with the data from our SQL Server we need to display some images on the webpage requested by our user. No matter how simple or complicated the logic to get the images is, if the images are 500 KB each our users will get the page slowly and they will still think that there is something wrong with our data. Anyway, I don’t mean to get carried away too far from SQL Server. Instead, what I would like to say is that DBAs should also be aware of ‘the big picture’. I wrote a blog post a while back on this topic, and if you are interested, you can read it here about the big picture. And finally, here are some guidelines for monitoring the network performance and improving it: Run a trace and outline all queries that return more than 1000 rows (in Profiler you can actually filter and sort the captured trace by number of returned rows). This is not a set number; it is more of a guideline. The general thought is that no application user can consume that many rows at once. Ask yourself and your fellow developers: ‘why?’. Monitor your network counters in Perfmon: Network Interface: Output queue length, Redirector: Network errors/sec, TCPv4: Segments retransmitted/sec and so on. Make sure to establish a good friendship with your network administrator (buy them coffee, for example) and get into a conversation about the network settings. Have them explain to you how the network cards are set up – are they standalone, are they ‘teamed’, what are the settings – full duplex and so on. Find some time to read a bit about networking. In this short blog post I hope I have turned your attention to ‘the big picture’ and the fact that there are other factors affecting our SQL Server, aside from its internal workings.
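To complement the guidelines above, here is a small, hedged sketch (not from the original post) of two checks you can run from T-SQL alongside the Perfmon counters: the cumulative network-related wait, and a reminder that each GO-separated batch is a separate roundtrip in Client Statistics. The table names assume the AdventureWorks sample database.

    -- Cumulative ASYNC_NETWORK_IO waits since the last restart or stats clear;
    -- high values suggest clients are not consuming result sets fast enough.
    SELECT wait_type, waiting_tasks_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'ASYNC_NETWORK_IO';

    -- Each GO-separated batch is one server roundtrip; confirm it in SSMS
    -- with Query > Include Client Statistics before running.
    SELECT TOP (10) * FROM Sales.SalesOrderHeader;
    GO
    SELECT TOP (10) * FROM Sales.SalesOrderDetail;
    GO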
As further reading I would still highly recommend the Wait Stats series on this blog, and I would also recommend you have the coffee-break conversation with your network admin as soon as possible. This guest post is written by Feodor Georgiev. Read all the posts in the Wait Types and Queues series. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL

    Read the article

  • SQL SERVER – Guest Post – Jonathan Kehayias – Wait Type – Day 16 of 28

    - by pinaldave
    Jonathan Kehayias (Blog | Twitter) is an MCITP Database Administrator and Developer, who got started in SQL Server in 2004 as a database developer and report writer in the natural gas industry. After spending two and a half years working in TSQL, in late 2006, he transitioned to the role of SQL Database Administrator. His primary passion is performance tuning, where he frequently rewrites queries for better performance and performs in-depth analysis of index implementation and usage. Jonathan blogs regularly on SQLBlog, and was a coauthor of Professional SQL Server 2008 Internals and Troubleshooting. On a personal note, I think Jonathan is an extremely positive person. In every conversation with him I have found that he is always eager to help and encourage. Every time he finds something that needs to be improved, he has contacted me without hesitation and guided me to improve, change and learn. Through all of this, he has not lost his focus on helping the larger community. I am honored that he has agreed to provide his views on the complex subject of Wait Types and Queues. Currently I am reading his series on Extended Events. Here is the guest blog post by Jonathan: SQL Server troubleshooting is all about correlating related pieces of information together to identify where exactly the root cause of a problem lies. In my daily work as a DBA, I generally get phone calls like, “So and so application is slow, what’s wrong with the SQL Server.” One of the funny things about the letters DBA is that they go so well with Default Blame Acceptor, and I really wish that I knew exactly who the first person was that pointed that out to me, because it really fits at times. A lot of times when I get this call, the problem isn’t related to SQL Server at all, but every now and then in my initial quick checks, something pops up that makes me start looking at things further. The SQL Server is slow, we see a number of tasks waiting on ASYNC_IO_COMPLETION, IO_COMPLETION, or PAGEIOLATCH_* waits in sys.dm_exec_requests and sys.dm_exec_waiting_tasks. These are also some of the highest wait types in sys.dm_os_wait_stats for the server, so it would appear that we have a disk I/O bottleneck on the machine. A quick check of sys.dm_io_virtual_file_stats() and tempdb shows a high write stall rate, while our user databases show high read stall rates on the data files. A quick check of some performance counters shows Page Life Expectancy on the server bouncing up and down in the 50-150 range, the Free Pages counter consistently hitting zero, and the Free List Stalls/sec counter repeatedly jumping over 10, but Buffer Cache Hit Ratio is 98-99%. Where exactly is the problem? In this case, which happens to be based on a real scenario I faced a few years back, the problem may not be a disk bottleneck at all; it may very well be a memory pressure issue on the server. A quick check of the system specs shows it is a dual dual-core server with 8GB RAM running SQL Server 2005 SP1 x64 on Windows Server 2003 R2 x64. Max Server memory is configured at 6GB and we think that this should be enough to handle the workload; or is it? This is a unique scenario because there are a couple of things happening inside of this system, and they all relate to what the root cause of the performance problem is on the system.
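As a hedged sketch of the "quick check" described above (not Jonathan's exact script), the per-file stall rates can be pulled from the DMV like this:

    -- Average read/write stall per I/O for every database file. The pattern in the
    -- scenario above: tempdb shows high write stalls while the user databases
    -- show high read stalls on their data files.
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY avg_read_stall_ms DESC;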
If we were to query sys.dm_exec_query_stats for the TOP 10 queries, by max_physical_reads, max_logical_reads, and max_worker_time, we may be able to find some queries that were using excessive I/O and possibly CPU against the system in their worst single execution. We can also CROSS APPLY to sys.dm_exec_sql_text() and see the statement text, and also CROSS APPLY sys.dm_exec_query_plan() to get the execution plan stored in cache. OK, quick check: the plans are pretty big, I see some large index seeks that estimate 2.8GB of data movement between operators, but everything looks like it is optimized the best it can be. Nothing really stands out in the code, and the indexing looks correct, and I should have enough memory to handle this in cache, so it must be a disk I/O problem, right? Not exactly! If we were to look at how much memory the plan cache is taking by querying sys.dm_os_memory_clerks for the CACHESTORE_SQLCP and CACHESTORE_OBJCP clerks we might be surprised at what we find. In SQL Server 2005 RTM and SP1, the plan cache was allowed to take up to 75% of the memory under 8GB. I’ll give you a second to go back and read that again. Yes, you read it correctly, it says 75% of the memory under 8GB, but you don’t have to take my word for it, you can validate this by reading Changes in Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2. In this scenario the application uses an entirely ad hoc workload against SQL Server and this leads to plan cache bloat, and up to 4.5GB of our 6GB of memory for SQL can be consumed by the plan cache in SQL Server 2005 SP1. This in turn reduces the size of the buffer cache to just 1.5GB, causing our 2.8GB of data movement in this expensive plan to cause complete flushing of the buffer cache, not just once initially, but then another time during the query’s execution, resulting in excessive physical I/O from disk. Keep in mind that this is not the only query executing at the time this occurs. Remember the output of sys.dm_io_virtual_file_stats() showed high read stalls on the data files for our user databases versus higher write stalls for tempdb? The memory pressure is also forcing heavier use of tempdb to handle sorting and hashing in the environment as well. The real clue here is the memory counters for the instance: Page Life Expectancy, Free List Pages, and Free List Stalls/sec. The fact that Page Life Expectancy is fluctuating between 50 and 150 constantly is a sign that the buffer cache is experiencing constant churn of data, once every minute to two and a half minutes. If you add to the Page Life Expectancy counter the consistent bottoming out of Free List Pages, along with Free List Stalls/sec consistently spiking over 10, you have the perfect memory pressure scenario. All of a sudden it may not be that our disk subsystem is the problem, but is instead an innocent bystander and victim. Side Note: The Page Life Expectancy counter dropping briefly and then returning to normal operating values intermittently is not necessarily a sign that the server is under memory pressure. The Books Online and a number of other references will tell you that this counter should remain on average above 300, which is the time in seconds a page will remain in cache before being flushed or aged out. This number, which equates to just five minutes, is incredibly low for modern systems and most published documents pre-date the predominance of 64-bit computing and the easy availability of larger amounts of memory in SQL Servers.
As food for thought, consider that my personal laptop has more memory in it than most SQL Servers did at the time those numbers were posted. I would argue that today, a system churning the buffer cache every five minutes is in need of some serious tuning or a hardware upgrade. Back to our problem and its investigation: There are two things really wrong with this server; first the plan cache is excessively consuming memory and bloated in size and we need to look at that and second we need to evaluate upgrading the memory to accommodate the workload being performed. In the case of the server I was working on there were a lot of single use plans found in sys.dm_exec_cached_plans (where usecounts=1). Single use plans waste space in the plan cache, especially when they are adhoc plans for statements that had concatenated filter criteria that is not likely to reoccur with any frequency.  SQL Server 2005 doesn’t natively have a way to evict a single plan from cache like SQL Server 2008 does, but MVP Kalen Delaney, showed a hack to evict a single plan by creating a plan guide for the statement and then dropping that plan guide in her blog post Geek City: Clearing a Single Plan from Cache. We could put that hack in place in a job to automate cleaning out all the single use plans periodically, minimizing the size of the plan cache, but a better solution would be to fix the application so that it uses proper parameterized calls to the database. You didn’t write the app, and you can’t change its design? Ok, well you could try to force parameterization to occur by creating and keeping plan guides in place, or we can try forcing parameterization at the database level by using ALTER DATABASE <dbname> SET PARAMETERIZATION FORCED and that might help. If neither of these help, we could periodically dump the plan cache for that database, as discussed as being a problem in Kalen’s blog post referenced above; not an ideal scenario. The other option is to increase the memory on the server to 16GB or 32GB, if the hardware allows it, which will increase the size of the plan cache as well as the buffer cache. In SQL Server 2005 SP1, on a system with 16GB of memory, if we set max server memory to 14GB the plan cache could use at most 9GB  [(8GB*.75)+(6GB*.5)=(6+3)=9GB], leaving 5GB for the buffer cache.  If we went to 32GB of memory and set max server memory to 28GB, the plan cache could use at most 16GB [(8*.75)+(20*.5)=(6+10)=16GB], leaving 12GB for the buffer cache. Thankfully we have SQL Server 2005 Service Pack 2, 3, and 4 these days which include the changes in plan cache sizing discussed in the Changes to Caching Behavior between SQL Server 2000, SQL Server 2005 RTM and SQL Server 2005 SP2 blog post. In real life, when I was troubleshooting this problem, I spent a week trying to chase down the cause of the disk I/O bottleneck with our Server Admin and SAN Admin, and there wasn’t much that could be done immediately there, so I finally asked if we could increase the memory on the server to 16GB, which did fix the problem. It wasn’t until I had this same problem occur on another system that I actually figured out how to really troubleshoot this down to the root cause.  I couldn’t believe the size of the plan cache on the server with 16GB of memory when I actually learned about this and went back to look at it. SQL Server is constantly telling a story to anyone that will listen. 
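For reference, here is a hedged sketch of the checks discussed over the last few paragraphs: the worst single executions, the plan cache size, the single-use plan count, and the forced parameterization workaround. Column names follow the SQL Server 2005/2008 documentation (sys.dm_os_memory_clerks uses single_pages_kb/multi_pages_kb there; later versions use pages_kb), and the database name in the last statement is a placeholder.

    -- Worst single executions by physical reads, with statement text and cached plan.
    SELECT TOP (10) qs.max_physical_reads, qs.max_logical_reads, qs.max_worker_time,
           st.text, qp.query_plan
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY qs.max_physical_reads DESC;

    -- How much memory the two plan cache stores are holding.
    SELECT type, SUM(single_pages_kb + multi_pages_kb) / 1024 AS cache_size_mb
    FROM sys.dm_os_memory_clerks
    WHERE type IN ('CACHESTORE_SQLCP', 'CACHESTORE_OBJCP')
    GROUP BY type;

    -- Single-use ad hoc plans bloating the cache.
    SELECT COUNT(*) AS single_use_plans,
           SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
    FROM sys.dm_exec_cached_plans
    WHERE usecounts = 1 AND objtype = 'Adhoc';

    -- The database-level workaround mentioned above (database name is hypothetical).
    ALTER DATABASE MyAppDb SET PARAMETERIZATION FORCED;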
As the DBA, you have to sit back and listen to all that it’s telling you and then evaluate the big picture and how all the pieces of data you can gather from SQL Server about performance relate to each other. One of the greatest tools out there is actually free, in the form of Diagnostic Scripts for SQL Server 2005 and 2008, created by MVP Glenn Alan Berry. Glenn’s scripts collect a majority of the information that SQL Server has to offer for rapid troubleshooting of problems, and he includes a lot of notes about what the outputs of each individual query might be telling you. When I read Pinal’s blog post SQL SERVER – ASYNC_IO_COMPLETION – Wait Type – Day 11 of 28, I noticed that he referenced Checking Memory Related Performance Counters in his post, but there was no real explanation about why checking memory counters is so important when looking at an I/O related wait type. I thought I’d chat with him briefly on Google Talk/Twitter DM and point this out, and offer a couple of other points I noted, so that he could add the information to his blog post if he found it useful. Instead he asked that I write a guest blog post about it. I am honored to be a guest blogger, and to be able to share this kind of information with the community. The information contained in this blog post is a glimpse at how I do troubleshooting almost every day of the week in my own environment. SQL Server provides us with a lot of information about how it is running, and where it may be having problems; it is up to us to play detective and find out how all that information comes together to tell us what’s really the problem. This blog post is written by Jonathan Kehayias (Blog | Twitter). Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, PostADay, Readers Contribution, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • How to Reuse Your Old Wi-Fi Router as a Network Switch

    - by Jason Fitzpatrick
    Just because your old Wi-Fi router has been replaced by a newer model doesn’t mean it needs to gather dust in the closet. Read on as we show you how to take an old and underpowered Wi-Fi router and turn it into a respectable network switch (saving you $20 in the process). Image by mmgallan. Why Do I Want To Do This? Wi-Fi technology has changed significantly in the last ten years but Ethernet-based networking has changed very little. As such, a Wi-Fi router with 2006-era guts is lagging significantly behind current Wi-Fi router technology, but the Ethernet networking component of the device is just as useful as ever; aside from potentially being only 100Mbps instead of 1000Mbps capable (which for 99% of home applications is irrelevant), Ethernet is Ethernet. What does this matter to you, the consumer? It means that even though your old router doesn’t hack it for your Wi-Fi needs any longer, the device is still a perfectly serviceable (and high quality) network switch. When do you need a network switch? Any time you want to share an Ethernet cable among multiple devices, you need a switch. For example, let’s say you have a single Ethernet wall jack behind your entertainment center. Unfortunately you have four devices that you want to link to your local network via hardline, including your smart HDTV, DVR, Xbox, and a little Raspberry Pi running XBMC. Instead of spending $20-30 to purchase a brand new switch of comparable build quality to your old Wi-Fi router, it makes financial sense (and is environmentally friendly) to invest five minutes of your time tweaking the settings on the old router to turn it from a Wi-Fi access point and routing tool into a network switch – perfect for dropping behind your entertainment center so that your DVR, Xbox, and media center computer can all share an Ethernet connection. What Do I Need? For this tutorial you’ll need a few things, all of which you likely have readily on hand or are free for download. To follow the basic portion of the tutorial, you’ll need the following: 1 Wi-Fi router with Ethernet ports 1 Computer with Ethernet jack 1 Ethernet cable For the advanced tutorial you’ll need all of those things, plus: 1 copy of DD-WRT firmware for your Wi-Fi router We’re conducting the experiment with a Linksys WRT54GL Wi-Fi router. The WRT54 series is one of the best selling Wi-Fi router series of all time and there’s a good chance a significant number of readers have one (or more) of them stuffed in an office closet. Even if you don’t have one of the WRT54 series routers, however, the principles we’re outlining here apply to all Wi-Fi routers; as long as your router administration panel allows the necessary changes you can follow right along with us. A quick note on the difference between the basic and advanced versions of this tutorial before we proceed. Your typical Wi-Fi router has 5 Ethernet ports on the back: 1 labeled “Internet”, “WAN”, or a variation thereof and intended to be connected to your DSL/Cable modem, and 4 labeled 1-4 intended to connect Ethernet devices like computers, printers, and game consoles directly to the Wi-Fi router. When you convert a Wi-Fi router to a switch, in most situations, you’ll lose two ports, as the “Internet” port cannot be used as a normal switch port and one of the switch ports becomes the input port for the Ethernet cable linking the switch to the main network. This means, referencing the diagram above, you’d lose the WAN port and LAN port 1, but retain LAN ports 2, 3, and 4 for use.
If you only need the switch for 2-3 devices this may be satisfactory. However, for those of you who would prefer a more traditional switch setup where there is a dedicated WAN port and the rest of the ports are accessible, you’ll need to flash a third-party router firmware like the powerful DD-WRT onto your device. Doing so opens up the router to a greater degree of modification and allows you to assign the previously reserved WAN port to the switch, thus opening up LAN ports 1-4. Even if you don’t intend to use that extra port, DD-WRT offers you so many more options that it’s worth the extra few steps. Preparing Your Router for Life as a Switch Before we jump right in to shutting down the Wi-Fi functionality and repurposing your device as a network switch, there are a few important prep steps to attend to. First, you want to reset the router (if you just flashed a new firmware to your router, skip this step). Follow the reset procedures for your particular router, or go with what is known as the “Peacock Method”, wherein you hold down the reset button for thirty seconds, unplug the router and wait (while still holding the reset button) for thirty seconds, and then plug it in while, again, continuing to hold down the reset button. Over the life of a router there are a variety of changes made, big and small, so it’s best to wipe them all back to the factory default before repurposing the router as a switch. Second, after resetting, we need to change the IP address of the device on the local network to an address which does not directly conflict with the new router. The typical default IP address for a home router is 192.168.1.1; if you ever need to get back into the administration panel of the router-turned-switch to check on things or make changes it will be a real hassle if the IP address of the device conflicts with the new home router. The simplest way to deal with this is to assign an address close to the actual router address but outside the range of addresses that your router will assign via DHCP; a good pick then is 192.168.1.2. Once the router is reset (or re-flashed) and has been assigned a new IP address, it’s time to configure it as a switch. Basic Router to Switch Configuration If you don’t want to (or need to) flash new firmware onto your device to open up that extra port, this is the section of the tutorial for you: we’ll cover how to take a stock router, our previously mentioned WRT54 series Linksys, and convert it to a switch. Hook the Wi-Fi router up to the network via one of the LAN ports (consider the WAN port as good as dead from this point forward; unless you start using the router in its traditional function again or later flash a more advanced firmware to the device, the port is officially retired at this point). Open the administration control panel via a web browser on a connected computer. Before we get started, two things: first, anything we don’t explicitly instruct you to change should be left in the default factory-reset setting as you find it; and second, change the settings in the order we list them, as some settings can’t be changed after certain features are disabled. To start, let’s navigate to Setup -> Basic Setup. Here you need to change the following things: Local IP Address: [different than the primary router, e.g. 192.168.1.2] Subnet Mask: [same as the primary router, e.g.
255.255.255.0] DHCP Server: Disable Save with the “Save Settings” button and then navigate to Setup -> Advanced Routing: Operating Mode: Router This particular setting is very counterintuitive. The “Operating Mode” toggle tells the device whether or not it should enable the Network Address Translation (NAT)  feature. Because we’re turning a smart piece of networking hardware into a relatively dumb one, we don’t need this feature so we switch from Gateway mode (NAT on) to Router mode (NAT off). Our next stop is Wireless -> Basic Wireless Settings: Wireless SSID Broadcast: Disable Wireless Network Mode: Disabled After disabling the wireless we’re going to, again, do something counterintuitive. Navigate to Wireless -> Wireless Security and set the following parameters: Security Mode: WPA2 Personal WPA Algorithms: TKIP+AES WPA Shared Key: [select some random string of letters, numbers, and symbols like JF#d$di!Hdgio890] Now you may be asking yourself, why on Earth are we setting a rather secure Wi-Fi configuration on a Wi-Fi router we’re not going to use as a Wi-Fi node? On the off chance that something strange happens after, say, a power outage when your router-turned-switch cycles on and off a bunch of times and the Wi-Fi functionality is activated we don’t want to be running the Wi-Fi node wide open and granting unfettered access to your network. While the chances of this are next-to-nonexistent, it takes only a few seconds to apply the security measure so there’s little reason not to. Save your changes and navigate to Security ->Firewall. Uncheck everything but Filter Multicast Firewall Protect: Disable At this point you can save your changes again, review the changes you’ve made to ensure they all stuck, and then deploy your “new” switch wherever it is needed. Advanced Router to Switch Configuration For the advanced configuration, you’ll need a copy of DD-WRT installed on your router. Although doing so is an extra few steps, it gives you a lot more control over the process and liberates an extra port on the device. Hook the Wi-Fi router up to the network via one of the LAN ports (later you can switch the cable to the WAN port). Open the administration control panel via web browser on the connected computer. Navigate to the Setup -> Basic Setup tab to get started. In the Basic Setup tab, ensure the following settings are adjusted. The setting changes are not optional and are required to turn the Wi-Fi router into a switch. WAN Connection Type: Disabled Local IP Address: [different than the primary router, e.g. 192.168.1.2] Subnet Mask: [same as the primary router, e.g. 255.255.255.0] DHCP Server: Disable In addition to disabling the DHCP server, also uncheck all the DNSMasq boxes as the bottom of the DHCP sub-menu. If you want to activate the extra port (and why wouldn’t you), in the WAN port section: Assign WAN Port to Switch [X] At this point the router has become a switch and you have access to the WAN port so the LAN ports are all free. Since we’re already in the control panel, however, we might as well flip a few optional toggles that further lock down the switch and prevent something odd from happening. The optional settings are arranged via the menu you find them in. Remember to save your settings with the save button before moving onto a new tab. While still in the Setup -> Basic Setup menu, change the following: Gateway/Local DNS : [IP address of primary router, e.g. 
192.168.1.1] NTP Client : Disable The next step is to turn off the radio completely (which not only kills the Wi-Fi but actually powers the physical radio chip off). Navigate to Wireless -> Advanced Settings -> Radio Time Restrictions: Radio Scheduling: Enable Select “Always Off” There’s no need to create a potential security problem by leaving the Wi-Fi radio on, the above toggle turns it completely off. Under Services -> Services: DNSMasq : Disable ttraff Daemon : Disable Under the Security -> Firewall tab, uncheck every box except “Filter Multicast”, as seen in the screenshot above, and then disable SPI Firewall. Once you’re done here save and move on to the Administration tab. Under Administration -> Management:  Info Site Password Protection : Enable Info Site MAC Masking : Disable CRON : Disable 802.1x : Disable Routing : Disable After this final round of tweaks, save and then apply your settings. Your router has now been, strategically, dumbed down enough to plod along as a very dependable little switch. Time to stuff it behind your desk or entertainment center and streamline your cabling.     

    Read the article

  • CloudBerry Online Backup 1.5 for Windows Home Server

    - by The Geek
    Overview CloudBerry Online Backup version 1.5 is a front end application for Amazon S3 storage for backing up your Windows Home Server data. It makes backing up your essential data to Amazon S3 an easy process in the event the disaster strikes. Installation You install the Cloudberry Addin as you do for any addins for Windows Home Server. On a PC on your network, browse to the shared folders on your server and open the Add-Ins folder and copy over WHS_CloudBerryOnlineBackupSetup_v1.5.0.81S3o.msi (link below), then close out of the folder. Next launch the Windows Home Server Console, click Settings, then Add-Ins. Click on the Available tab and click the Install button. It installs very quickly, and when you get the Installation Succeeded dialog click OK. You will lose connection through the Console, just click OK, then reconnect. After reconnecting, you’ll see CloudBerry Backup has been installed, and you can begin using it. You can setup a backup plan right away or find out what’s new with version 1.5. Amazon S3 Account If you don’t already have an Amazon S3 account, you’ll be prompted to create a new one. Click on the Create an account hyperlink, which takes you to the Amazon S3 page where you can sign up. After reviewing the functionality of Amazon S3, click on the Sign Up for Amazon S3 button. Enter in your contact information and accept the Amazon Web Services Customer Agreement. You’re then shown their pricing for storage plans. The amount of storage space you use will depend on your needs. It’s relatively cheap for smaller amounts of data. Just keep in mind the more data you store and download, the more S3 is going to cost. Note: Amazon S3 is introducing Reduced Redundancy Storage which will lower the cost of the data stored on S3. CloudBerry 1.5 will support this new feature. You can find out more about this new pricing structure. Note: Keep in mind that after you first sign up for an Amazon S3 account, it can take up to 24 hours to be authorized. In fact, you may want to sign up for the S3 account before installing the Add-In. After you sign up for your S3 Account, you’ll be given access credentials which you can enter in and create a Storage Bucket name. Features & Use CloudBerry is wizard driven, straight-forward and easy to use. Here we take a look at creating a backup plan. To begin, click on the Setup Backup Plan button to kick off the wizard. Select your backup mode based on the amount of features you want. In our example we’re going to select Advanced Mode as it offers more features than Simple Mode. Select your backup storage account or create a new one. You can select a default account by checking Use currently selected account as default. Now you can go through and select the files and folders you want to backup from your home server. Check the box Show physical drives to get more of a selection of files and folders. This also allows you to backup files from your data drive as well. It has full support for drive extenders so you can backup your shares as well. The cool thing about Cloudberry is it allows you to drill down specific files and folders unlike other WHS backup utilities. Next you can use advanced filters to specify files and/or folders to skip if you want. There are compression and encryption options as well. This will save storage space, bandwidth, and keep your data secure. Purge Options allow you to customize options for getting rid of older files. You can also select the option to delete files from the S3 service that have been deleted locally. 
Be careful with this option however, as you won’t be able to restore files if you delete them locally. You have some nice scheduling options from running backups manually, specific date and time, or recurring daily, weekly or monthly. Receive email notifications in all cases or when a backup fails. This is a good option so you know if things were successful or something failed, and you need to back it up manually. Email notifications… Give your plan a name… Then if the summary page looks good you can continue, or still go back at this point if something doesn’t look correct and needs adjusting. That’s it! You’re ready to go, and you have an option to start your first backup right away. After you’ve created a backup plan, you can go in and edit, delete, view history, or restore files. Restoring Files using CloudBerry To restore data from your backups kick off the Restore Wizard and select the backup to restore from. You can select the last backup, a specific point in time, or manually browse through the files. Browse through the directory and select the files you need to restore. Choose the destination to restore the files to. You can select from the original location, a specific location, to overwrite existing files, or set the location as the default for future restores. If the files are encrypted, enter in the correct passwords. If the summary looks good, click on Next to start the restore process. You’ll be shown a progress bar at the bottom of the screen while the files are restored. After the process has completed, close out of the Restore Wizard. In this example we restored a couple of music files to the desktop of Windows Home Server… But as shown above you can save them to the original location, other network locations, or WHS shared folders. This can make it a lot easier to keep track of files you’ve restored. You can also access different options for CloudBerry by clicking Settings in WHS Console then CloudBerry Backup. Here you can set up a new storage account, check for updates, app options, Diagnostics, and send feedback. Under Options there are several settings you can tweak to get the best experience for your WHS backups. CloudBerry Web Interface Another nice feature is the CloudBerry Web Interface so you can access your data from anywhere you have an Internet connection. To check it out in WHS Console, click on the Backup Web Interface link…you’ll probably want to bookmark the link in your favorite browser. Note: This feature is still in beta and at the time of this review, the Web Interface wasn’t up and running so we weren’t able to test it out. Performance The Cloudberry app works very well through the Windows Home Server Console. The amount of time it takes to backup or restore your data will depend on the speed of your Internet connection and size of the files. In our tests, backing up 1GB of data to the Amazon S3 account took around an hour, but we were running it on a DSL with limited upload speeds so your mileage will vary. Product Support In our experience, the team at CloudBerry offered great support in a timely manner when contacting them. You can fill out a help request through a form on their website and they also have a community forum. Conclusion We were very pleased with CloudBerry Online Backup for WHS. It’s wizard driven interface makes it extremely easy to use, and offers comprehensive backup choices for your Amazon S3 account. 
CloudBerry will only back up files that have been modified, so if files haven’t been changed, they won’t be backed up again. They offer a free 15-day trial, and a full license is $29.99 after that. Once you buy the app you own it, and charges to your S3 account will vary depending on the amount of data you upload. If you’re looking for an effective and easy-to-use front-end application to back up your Windows Home Server data to your Amazon S3 account, CloudBerry is a recommended, affordable choice. Download CloudBerry for Windows Home Server Sign Up For Amazon S3 Account Rating Installation: 9 Ease of Use: 8 Features: 8 Performance: 8 Product Support: 8

    Read the article

  • Advantages of SQL Backup Pro

    - by Grant Fritchey
    Getting backups of your databases in place is a fundamental issue for protection of the business. Yes, I said business, not data, not databases, but business. Because of a lack of good, tested backups, companies have gone completely out of business or suffered traumatic financial loss. That’s just a simple fact (outlined with a few examples here). So you want to get backups right. That’s a big part of why we make Red Gate SQL Backup Pro work the way it does. Yes, you could just use native backups, but you’ll be missing a few advantages that we provide over and above what you get out of the box from Microsoft. Let’s talk about them. Guidance If you’re a hard-core DBA with 20+ years of experience on every version of SQL Server and several other data platforms besides, you may already know what you need in order to get a set of tested backups in place. But, if you’re not, maybe a little help would be a good thing. To set up backups for your servers, we supply a wizard that will step you through the entire process. It will also act to guide you down good paths. For example, if your databases are in Full Recovery, you should set up transaction log backups to run on a regular basis. When you choose a transaction log backup from the Backup Type you’ll see that only those databases that are in Full Recovery will be listed: This makes it very easy to be sure you have a log backup set up for all the databases you should, and for none of the databases where you won’t be able to. There are other examples of guidance throughout the product. If you have the responsibility of managing backups but very little knowledge or time, we can help you out. Throughout the software you’ll notice little green question marks. You can see two in the screen above and more in each of the screens in other topics below this one. Clicking on these will open a window with additional information about the topic in question which should help to guide you through some of the tougher decisions you may have to make while setting up your backup jobs. Here’s an example: Backup Copies As a part of the wizard you can choose to make a copy of your backup on your network. This process runs as part of the Red Gate SQL Backup engine. It will copy your backup, after completing the backup so it doesn’t cause any additional blocking or resource use within the backup process, to the network location you define. Creating a copy acts as a mechanism of protection for your backups. You can then back up that copy or do other things with it, all without affecting the original backup file. This requires either an additional backup or additional scripting to get it done within the native Microsoft backup engine. Offsite Storage Red Gate offers you the ability to immediately copy your backup to the cloud as a further, off-site protection of your backups. It’s a service we provide and expose through the Backup wizard. Your backup will complete first, just like with the network backup copy, then an asynchronous process will copy that backup to cloud storage. Again, this is built right into the wizard, or even the command line calls to SQL Backup, so it’s part of a single process within your system. With native backup you would need to write additional scripts, possibly outside of T-SQL, to make this happen. Before you can use this with your backups you’ll need to do a little setup, but it’s built right into the product to get this done. You’ll be directed to the web site for our hosted storage where you can set up an account.
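As a small, hedged illustration of the recovery-model filter the wizard applies in the Guidance section above (this is not SQL Backup Pro code, just the equivalent catalog query):

    -- Only databases in FULL recovery are candidates for transaction log backups.
    SELECT name, recovery_model_desc
    FROM sys.databases
    WHERE recovery_model_desc = 'FULL';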
Compression If you have SQL Server 2008 Enterprise, or you’re on SQL Server 2008R2 or greater and you have a Standard or Enterprise license, then you have backup compression. It’s built right in and works well. But, if you need even more compression then you might want to consider Red Gate SQL Backup Pro. We offer four levels of compression within the product. This means you can get a little compression faster, or you can just sacrifice some CPU time and get even more compression. You decide. For just a simple example I backed up AdventureWorks2012 using both methods of compression. The resulting file from native was 53mb. Our file was 33mb. That’s a file that is smaller by 38%, not a small number when we start talking gigabytes. We even provide guidance here to help you determine which level of compression would be right for you and your system: So for this test, if you wanted maximum compression with minimum CPU use you’d probably want to go with Level 2 which gets you almost as much compression as Level 3 but will use fewer resources. And that compression is still better than the native one by 10%. Restore Testing Backups are vital. But, a backup is just a file until you restore it. How do you know that you can restore that backup? Of course, you’ll use CHECKSUM to validate that what was read from disk during the backup process is what gets written to the backup file. You’ll also use VERIFYONLY to check that the backup header and the checksums on the backup file are valid. But, this doesn’t do a complete test of the backup. The only complete test is a restore. So, what you really need is a process that tests your backups. This is something you’ll have to schedule separately from your backups, but we provide a couple of mechanisms to help you out here. First, when you create a backup schedule, all done through our wizard which gives you as much guidance as you get when running backups, you get the option of creating a reminder to create a job to test your restores. You can enable this or disable it as you choose when creating your scheduled backups. Once you’re ready to schedule test restores for your databases, we have a wizard for this as well. After you choose the databases and restores you want to test, all configurable for automation, you get to decide if you’re going to restore to a specified copy or to the original database: If you’re doing your tests on a new server (probably the best choice) you can just overwrite the original database if it’s there. If not, you may want to create a new database each time you test your restores. Another part of validating your backups is ensuring that they can pass consistency checks. So we have DBCC built right into the process. You can even decide how you want DBCC run, which error messages to include, limit or add to the checks being run. With this you could offload some DBCC checks from your production system so that you only run the physical checks on your production box, but run the full check on this backup. That makes backup testing not just a general safety process, but a performance enhancer as well: Finally, assuming the tests pass, you can delete the database, leave it in place, or delete it regardless of the tests passing. All this is automated and scheduled through the SQL Agent job on your servers. Running your databases through this process will ensure that you don’t just have backups, but that you have tested backups. 
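For comparison, here is a hedged sketch of the native commands the article measures itself against: a compressed, checksummed backup plus the VERIFYONLY check that, as noted above, is still no substitute for a test restore. The file path is a placeholder, and AdventureWorks2012 is the sample database used in the compression test above.

    -- Native backup with page checksums and compression (path is a placeholder).
    BACKUP DATABASE AdventureWorks2012
    TO DISK = N'D:\Backups\AdventureWorks2012.bak'
    WITH CHECKSUM, COMPRESSION, INIT;

    -- Validates the backup header and checksums, but does not prove the backup restores.
    RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\AdventureWorks2012.bak'
    WITH CHECKSUM;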
Single Point of Management If you have more than one server to maintain, getting backups setup could be a tedious process. But, with Red Gate SQL Backup Pro you can connect to multiple servers and then manage all your databases and all your servers backups from a single location. You’ll be able to see what is scheduled, what has run successfully and what has failed, all from a single interface without having to connect to different servers. Log Shipping Wizard If you want to set up log shipping as part of a disaster recovery process, it can frequently be a pain to get configured correctly. We supply a wizard that will walk you through every step of the process including setting up alerts so you’ll know should your log shipping fail. Summary You want to get your backups right. As outlined above, Red Gate SQL Backup Pro will absolutely help you there. We supply a number of processes and functionalities above and beyond what you get with SQL Server native. Plus, with our guidance, hints and reminders, you will get your backups set up in a way that protects your business.

    Read the article

  • GuestPost: Unit Testing Entity Framework (v1) Dependent Code using TypeMock Isolator

    - by Eric Nelson
    Time for another guest post (check out others in the series), this time bringing together the world of mocking with the world of Entity Framework. A big thanks to Moses for agreeing to do this. Unit Testing Entity Framework Dependent Code using TypeMock Isolator by Muhammad Mosa Introduction Unit testing data access code in my opinion is a challenging thing. Let us consider unit tests and integration tests. In integration tests you are allowed to have environmental dependencies such as a physical database connection to insert, update, delete or retrieve your data. However when performing unit tests it is often much more efficient and productive to remove environmental dependencies. Instead you will need to fake these dependencies. Faking a database (also known as mocking) can be relatively straightforward but the version of Entity Framework released with .Net 3.5 SP1 has a number of implementation specifics which actually make faking the existence of a database quite difficult. Faking Entity Framework As mentioned earlier, to effectively unit test you will need to fake/simulate Entity Framework calls to the database. There are many free open source mocking frameworks that can help you achieve this but it will require additional effort to overcome and work around a number of limitations in those frameworks. Examples of these limitations include: Not able to fake calls to non-virtual methods Not able to fake sealed classes Not able to fake LINQ to Entities queries (replace database calls with in-memory collection calls) There is a mocking framework which is flexible enough to handle limitations such as those above. The commercially available TypeMock Isolator can do the job for you with less code and ultimately more readable unit tests. I’m going to demonstrate tackling one of those limitations using MoQ as my mocking framework. Then I will tackle the same issue using TypeMock Isolator. Mocking Entity Framework with MoQ One basic need when faking Entity Framework is to fake the ObjectContext. This cannot be done by passing any connection string. You have to pass a correct Entity Framework connection string that specifies CSDL, SSDL and MSL locations along with a provider connection string. Assuming we are going to do that, we’ll explore another limitation. The limitation we are going to face now is related to not being able to fake calls to non-virtual/overridable members with MoQ. I have the following repository method that adds an EntityObject (instance of a Blog entity) to the Blogs entity set in an ObjectContext. public override void Add(Blog blog) { if(BlogContext.Blogs.Any(b=>b.Name == blog.Name)) { throw new InvalidOperationException("Blog with same name already exists!"); } BlogContext.AddToBlogs(blog); } The method does a very simple check that the name of the new Blog entity instance doesn’t exist. This is done through the simple LINQ query above. If the blog doesn’t already exist it simply adds it to the current context to be saved when SaveChanges of the ObjectContext instance (e.g. BlogContext) is called. However, if a blog with the same name exists, an exception (InvalidOperationException) will be thrown. Let us now create a unit test for the Add method using MoQ. [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_Should_Throw_InvalidOperationException_When_Blog_With_Same_Name_Already_Exists() { //(1) We shouldn't depend on configuration when doing unit tests!
But, //its a workaround to fake the ObjectContext string connectionString = ConfigurationManager .ConnectionStrings["MyBlogConnString"] .ConnectionString; //(2) Arrange: Fake ObjectContext var fakeContext = new Mock<MyBlogContext>(connectionString); //(3) Next Line will pass, as ObjectContext now can be faked with proper connection string var repo = new BlogRepository(fakeContext.Object); //(4) Create fake ObjectQuery<Blog>. Will be used to substitute MyBlogContext.Blogs property var fakeObjectQuery = new Mock<ObjectQuery<Blog>>("[Blogs]", fakeContext.Object); //(5) Arrange: Set Expectations //Next line will throw an exception by MoQ: //System.ArgumentException: Invalid setup on a non-overridable member fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); //Act repo.Add(new Blog { Name = "NewBlog" }); } This test method is checking to see if the correct exception ([ExpectedException(typeof(InvalidOperationException))]) is thrown when a developer attempts to Add a blog with a name that already exists. On (1) a connection string is initialized from the configuration file to retrieve the full connection string. On (2) a fake ObjectContext is being created. The ObjectContext here is MyBlogContext and it’s being created using this: var fakeContext = new Mock<MyBlogContext>(connectionString); This way a fake context is being created using MoQ. On (3) a BlogRepository instance is created. BlogRepository has a dependency on the generated Entity Framework ObjectContext, MyBlogContext. And so the fake context is passed to the constructor. var repo = new BlogRepository(fakeContext.Object); On (4) a fake instance of ObjectQuery<Blog> is being created to use as a substitute for the MyBlogContext.Blogs property as we will see in (5). On (5) we set up an expectation for calling the Blogs property of MyBlogContext and substitute the return result with the fake ObjectQuery<Blog> instance created on (4). When you run this test it will fail with MoQ throwing an exception because of this line: fakeContext.SetupGet(c=>c.Blogs).Returns(fakeObjectQuery.Object); This happens because the generated property MyBlogContext.Blogs is not virtual/overridable. And even assuming it is virtual, or you managed to make it virtual, it will fail at the following line, throwing the same exception: fakeObjectQuery.Setup(q => q.Any(b => b.Name == "NewBlog")).Returns(true); This time the test will fail because the Any extension method is not virtual/overridable. You won’t be able to replace ObjectQuery<Blog> with a fake in-memory collection to test your LINQ to Entities queries. Now let’s see how replacing MoQ with TypeMock Isolator can help.
    Mocking Entity Framework with TypeMock Isolator The following is the same test method we had above for MoQ, but this time implemented using TypeMock Isolator: [TestMethod] [ExpectedException(typeof(InvalidOperationException))] public void Add_New_Blog_That_Already_Exists_Should_Throw_InvalidOperationException() { //(1) Create fake in-memory collection of blogs var fakeInMemoryBlogs = new List<Blog> {new Blog {Name = "FakeBlog"}}; //(2) Create fake context var fakeContext = Isolate.Fake.Instance<MyBlogContext>(); //(3) Set up the expected call to the MyBlogContext.Blogs property through the fake context Isolate.WhenCalled(() => fakeContext.Blogs) .WillReturnCollectionValuesOf(fakeInMemoryBlogs.AsQueryable()); //(4) Create a new blog with a name that already exists in the fake in-memory collection from (1) var blog = new Blog {Name = "FakeBlog"}; //(5) Instantiate an instance of BlogRepository (the class under test) var repo = new BlogRepository(fakeContext); //(6) Act by adding the newly created blog repo.Add(blog); } When running the above test method it will pass, as the Add method of BlogRepository throws an InvalidOperationException, which is the expected behaviour. Nothing prevents us from faking out the database interaction! Even faking the ObjectContext at (2) didn't require a connection string. On (3) Isolator sets up a fake result for MyBlogContext.Blogs when it's called through the fake instance fakeContext created on (2). The fake result is just an in-memory collection declared and initialized on (1). Finally, at (6), we act by calling the Add method of BlogRepository, passing a new Blog instance whose name already exists in the fake in-memory collection which we set up at (1). As expected the test will pass, because it throws the expected exception defined on top of the test method - InvalidOperationException. TypeMock Isolator succeeded in faking Entity Framework with ease. Conclusion We explored how to write a simple unit test using TypeMock Isolator for code which is using Entity Framework. We also explored a few of the limitations of other mocking frameworks which TypeMock is successfully able to handle. There are workarounds that you can use to overcome limitations when using MoQ or Rhino Mocks; however, the workarounds will require you to write more code and your tests will likely be more complex. For a comparison between different mocking frameworks take a look at this document produced by TypeMock. You might also want to check out this open source project to compare mocking frameworks. I hope you enjoyed this post. Muhammad Mosa http://mosesofegypt.net/ http://twitter.com/mosessaur Screencast of unit testing Entity Framework Related Links GuestPost: Introduction to Mocking GuestPost: Typemock Isolator – Much more than an Isolation framework

    Read the article

  • Mr Flibble: As Seen Through a Lens, Darkly

    - by Phil Factor
    One of the rewarding things about getting involved with Simple-Talk has been meeting and working with some pretty daunting talents. I’d like to say that Dom Reed’s talents are at the end of the visible spectrum, but then there is Richard, who pops up on national radio occasionally, presenting intellectual programs, Andrew, master of the ukulele, with his pioneering local history work, and Tony with marathon running and his past as a university lecturer. However, Dom, who is Red Gate’s head of creative design and who did the preliminary design work for Simple-Talk, has taken the art of photography to an extreme that was impossible before Photoshop. He’s not the first person to take a photograph of himself every day for two years, but he is definitely the first to weave the results into a frightening narrative that veers from comedy to pathos, using all the arts of Photoshop to create a fictional character, Mr Flibble.   Have a look at some of the Flickr pages. Uncle Spike The B-Men – Woolverine The 2011 BoyZ iN Sink reunion tour turned out to be their last Error 404 – Flibble not found Mr Flibble is not a normal type of alter-ego. We generally prefer to choose bronze age warriors of impossibly magnificent physique and stamina; superheroes who bestride the world, scorning the forces of evil and anarchy in a series of noble and righteous quests. Not so Dom, whose Mr Flibble is vulnerable, and laid low by an addiction to toxic substances. His work has gained an international cult following and is used as course material on several photography courses. Although his work was for a while ignored by the more conventional world of ‘art’ photography, it became famous through the internet. His photos have received well over a million views on Flickr. It was definitely time to turn this work into a book, because the whole sequence of images has its maximum effect when seen in sequence. He has a Kickstarter project page, one of the first following the recent UK launch of the crowdfunding platform. The publication of the book should be a major event and the £45 I shall divvy up will be one of the securest investments I shall ever make. The local news in Cambridge picked up on the project and I can quote from the report by the excellent Cabume website, the source of tech news from the ‘Cambridge cluster’: Put really simply, Mr Flibble likes to dress up and take pictures of himself. One of the benefits of a split personality, however, is that Mr Flibble is supported in his endeavour by Reed’s top-notch photography skills, supreme mastery of Photoshop and unflinching dedication to the cause. The duo have collaborated to take a picture every day for the past 730-plus days. It is not a big surprise that neither Mr Flibble nor Reed watches any TV: In addition to his full-time role at Cambridge software house, Red Gate Software, as head of creativity and the two to five hours a day he spends taking the Mr Flibble shots, Reed also helps organise the . And now Reed is using Kickstarter to see if the world is ready for a Mr Flibble coffee table book. Judging by the early response, it is. At the time of writing, just a few days after it went live, ‘I Drink Lead Paint: An absurd photography book by Mr Flibble’ had raised £1,545 of the £10,000 target it needs to raise by the Friday 30 November deadline from 37 backers. 
Following the standard Kickstarter template, Reed is offering a series of rewards based on the amount pledged, ranging from a Mr Flibble desktop wallpaper for pledges of £5 or more to a signed copy of the book for pledges of £45 or more, right up to a starring role in the book for £1,500. Mr Flibble is unquestionably one of the more deranged Kickstarter hopefuls, but don’t think for a second that he doesn’t have a firm grasp on the challenges he faces on the road to immortalisation on 150 gsm stock. Under the section ‘risks and challenges’ on his Kickstarter page his statement begins: “An angry horde of telepathic iguanas discover the world’s last remaining stock of vintage lead paint and hold me to ransom. Gosh how I love to guzzle lead paint. Anyway… faced with such brazen bravado, I cower at the thought of taking on their combined might and die a sad and lonely Flibble deprived of my one and only true liquid love.” At which point, Reed manages to wrestle away the keyboard, giving him the opportunity to present slightly more cogent analysis of the obstacles the project must still overcome. We asked Reed a few questions about Mr Flibble’s Kickstarter adventure and felt that his responses were worth publishing in full: Firstly, how did you manage it – holding down a full time job and also conceiving and executing these ideas on a daily basis? I employed a small team of ferocious gerbils to feed me ideas on a daily basis. Whilst most of their ideas were incomprehensibly rubbish and usually revolved around food, just occasionally they’d give me an idea like my B-Men series. As a backup plan though, I found that the best way to generate ideas was to actually start taking photos. If I were to stand in front of the camera, pull a silly face, place a vegetable on my head or something else equally stupid, the resulting photo of that would typically spark an idea when I came to look at it. Sitting around idly trying to think of an idea was doomed to result in no ideas. I admit that I really struggled with time. I’m proud that I never missed a day, but it was definitely hard when you were late from work, tired or doing something socially on the same day. I don’t watch TV, which I guess really helps, because I’d frequently be spending 2-5 hours taking and processing the photos every day. Are there any overlaps between software development and creative thinking? Software is an inherently creative business and the speed that it moves ensures you always have to find solutions to new things. Everyone in the team needs to be a problem solver. Has it helped me specifically with my photography? Probably. Working within teams that continually need to figure out new stuff keeps the brain feisty I suppose, and I guess I’m continually exposed to a lot of possible sources of inspiration. How specifically will this Kickstarter project allow you to test the commercial appeal of your work and do you plan to get the book into shops? It’s taken a while to be confident saying it, but I know that people like the work that I do. I’ve had well over a million views of my pictures, many humbling comments and I know I’ve garnered some loyal fans out there who anticipate my next photo. For me, this Kickstarter is about seeing if there’s worth to my work beyond just making people smile. In an online world where there’s an abundance of freely available content, can you hope to receive anything from what you do, or would people just move onto the next piece of content if you happen to ask for some support? 
    A book has been the single most requested thing that people have asked me to produce and it’s something that I feel would showcase my work well. It’s just hard to convince people in the publishing industry just now to take any kind of risk – they’ve been hit hard. If I can show that people would like my work enough to buy a book, then it sends a pretty clear picture that publishers might hear, or it gives me the confidence enough to invest in myself a bit more – hard to do when you’re riddled with self-doubt! I’d love to see my work in the shops, yes. I could see it being the thing that someone flips through idly as they’re Christmas shopping and recognizing that it’d be just the perfect gift for their difficult-to-buy-for friend or relative. That said, working in the software industry means I’m clearly aware of how I could use technology to distribute my work, but I can’t deny that there’s something very appealing to having a physical thing to hold in your hands. If the project is successful, is there a chance that it could become a full-time job? At the moment that seems like a distant dream, as should this be successful, there are many more steps I’d need to take to reach any kind of business viability. Kickstarter seems exactly that – a way for people to help kick start me into something that could take off. If people like my work and want me to succeed with it, then taking a look at my Kickstarter page (and hopefully pledging a bit of support) would make my elbows blush considerably. So there it is. An opportunity to open the wallet just a bit to ensure that one of the more unusual talents sees the light of day in the format it deserves.  

    Read the article

  • Can't add Fedora 14 to grub.

    - by Dananjaya
    Today I installed Fedora 14 in a different partition in the same hard drive as Ubuntu. At the Fedora 14 installation, I chose not to install Boot-loader in the MBR, and instead chose to install it in the Fedora partition itself, which is according to my HD layout /sda3. After the Fedora 14 installation I booted in to Ubuntu and ran sudo update-grub but 'grub.cfg' fails to add Fedora 14 in to the OS list. Here is the output of boot-info script. Boot Info Script 0.60 from 17 May 2011 ============================= Boot Info Summary: =============================== = Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks for (,msdos1)/boot/grub on this drive. sda1: __________________________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 11.04 Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img sda2: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: sda5: __________________________________________________________________________ File system: swap Boot sector type: - Boot sector info: sda3: __________________________________________________________________________ File system: ext4 Boot sector type: Grub Legacy Boot sector info: Grub Legacy (v0.97) is installed in the boot sector of sda3 and looks at sector 49897340 on boot drive #1 for the stage2 file. A stage2 file is at this location on /dev/sda. Stage2 looks on partition #3 for /grub/grub.conf. Operating System: Boot files: /grub/menu.lst /grub/grub.conf sda4: __________________________________________________________________________ File system: LVM2_member Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Disk /dev/sda: 40.0 GB, 40020664320 bytes 255 heads, 63 sectors/track, 4865 cylinders, total 78165360 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 49,865,759 49,863,712 83 Linux /dev/sda2 74,866,686 78,163,967 3,297,282 5 Extended /dev/sda5 74,866,688 78,163,967 3,297,280 82 Linux swap / Solaris /dev/sda3 49,866,752 50,890,751 1,024,000 83 Linux /dev/sda4 50,890,752 74,864,639 23,973,888 8e Linux LVM "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/sda1 03e2a8da-171f-49e9-b24d-434e66cd1140 ext4 /dev/sda3 dea81d77-a375-4d0e-954e-1829f6b91f10 ext4 /dev/sda4 mzVoj0-GHJu-DJr4-0G2Y-SzZ0-LTfW-F01yf9 LVM2_member /dev/sda5 3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/sda1 / ext4 (rw,errors=remount-ro,user_xattr,commit=0) =========================== sda1/boot/grub/grub.cfg: =========================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="2" if [ "${prev_saved_entry}" ]; then set 
saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=1024x768 load_video insmod gfxterm fi terminal_output gfxterm insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 2.6.38-8-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.38-8-generic } menuentry 'Ubuntu, with Linux 2.6.38-8-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.38-8-generic ...' linux /boot/vmlinuz-2.6.38-8-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.38-8-generic } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 2.6.35-28-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.35-28-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.35-28-generic ...' linux /boot/vmlinuz-2.6.35-28-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-2.6.35-28-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro quiet splash vt.handoff=7 initrd /boot/initrd.img-2.6.32-21-generic } menuentry 'Ubuntu, with Linux 2.6.32-21-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail set gfxpayload=$linux_gfx_mode insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 echo 'Loading Linux 2.6.32-21-generic ...' linux /boot/vmlinuz-2.6.32-21-generic root=UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 ro single echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-2.6.32-21-generic } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/sda,msdos1)' search --no-floppy --fs-uuid --set=root 03e2a8da-171f-49e9-b24d-434e66cd1140 linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober ### if [ "x${timeout}" != "x-1" ]; then if keystatus; then if keystatus --shift; then set timeout=-1 else set timeout=0 fi else if sleep --interruptible 3 ; then set timeout=0 fi fi fi ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- =============================== sda1/etc/fstab: ================================ -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid -o value -s UUID' to print the universally unique identifier # for a device; this may be used with UUID= as a more robust way to name # devices that works even if disks are added and removed. See fstab(5). 
# # proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda1 during installation # Commented out by Dropbox # UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=3e89ba8e-7754-4ee4-aca1-e2a82bffb7a7 none swap sw 0 0 UUID=03e2a8da-171f-49e9-b24d-434e66cd1140 / ext4 errors=remount-ro,user_xattr 0 1 -------------------------------------------------------------------------------- =================== sda1: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 0.065803528 = 0.070656000 boot/grub/core.img 1 21.263332367 = 22.831329280 boot/grub/grub.cfg 1 0.771381378 = 0.828264448 boot/initrd.img-2.6.31-wl 1 2.054199219 = 2.205679616 boot/initrd.img-2.6.32-21-generic 3 2.893260956 = 3.106615296 boot/initrd.img-2.6.35-28-generic 2 6.833232880 = 7.337127936 boot/initrd.img-2.6.38-8-generic 2 1.772453308 = 1.903157248 boot/vmlinuz-2.6.32-21-generic 2 2.068012238 = 2.220511232 boot/vmlinuz-2.6.35-28-generic 1 5.532531738 = 5.940510720 boot/vmlinuz-2.6.38-8-generic 1 6.833232880 = 7.337127936 initrd.img 2 2.893260956 = 3.106615296 initrd.img.old 2 5.532531738 = 5.940510720 vmlinuz 1 2.068012238 = 2.220511232 vmlinuz.old 1 ============================= sda3/grub/grub.conf: ============================= -------------------------------------------------------------------------------- # grub.conf generated by anaconda # # Note that you do not have to rerun grub after making changes to this file # NOTICE: You have a /boot partition. This means that # all kernel and initrd paths are relative to /boot/, eg. # root (hd0,2) # kernel /vmlinuz-version ro root=/dev/mapper/VolGroup-lv_root # initrd /initrd-[generic-]version.img #boot=/dev/sda3 default=0 timeout=0 splashimage=(hd0,2)/grub/splash.xpm.gz hiddenmenu title Fedora (2.6.35.6-45.fc14.i686) root (hd0,2) kernel /vmlinuz-2.6.35.6-45.fc14.i686 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us rhgb quiet initrd /initramfs-2.6.35.6-45.fc14.i686.img -------------------------------------------------------------------------------- =================== sda3: Location of files loaded by Grub: ==================== GiB - GB File Fragment(s) 23.792903900 = 25.547436032 grub/grub.conf 1 23.792903900 = 25.547436032 grub/menu.lst 1 23.793020248 = 25.547560960 grub/stage2 1 23.817364693 = 25.573700608 initramfs-2.6.35.6-45.fc14.i686.img 2 23.787566185 = 25.541704704 initrd-plymouth.img 1 23.791228294 = 25.545636864 vmlinuz-2.6.35.6-45.fc14.i686 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda2 00000000 81 71 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.qb.....MC:.....| 00000010 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................| 00000020 5d 56 50 ff a1 94 89 ff 81 70 62 ff 81 70 62 ff |]VP......pb..pb.| 00000030 81 70 62 ff 81 70 62 ff 81 70 62 ff a1 94 89 ff |.pb..pb..pb.....| 00000040 4d 43 3a ff fa f2 ec ff fb f6 f1 ff fc f8 f4 ff |MC:.............| 00000050 fc f8 f4 ff fc f8 f4 ff 5d 56 50 ff a1 94 89 ff |........]VP.....| 00000060 81 70 62 ff 81 70 62 ff 81 70 62 ff 81 70 62 ff |.pb..pb..pb..pb.| 00000070 81 70 62 ff a1 94 89 ff 4d 43 3a ff fa f2 ec ff |.pb.....MC:.....| 00000080 fb f6 f1 ff fc f8 f4 ff fc f8 f4 ff fc f8 f4 ff |................| 00000090 5d 56 50 ff a0 93 89 ff 80 6f 61 ff 80 6f 61 ff |]VP......oa..oa.| 000000a0 80 6f 61 ff 80 6f 61 ff 80 6f 61 
ff a0 93 89 ff |.oa..oa..oa.....| 000000b0 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 000000c0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9f 93 88 ff |........]VP.....| 000000d0 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff |.o`..o`..o`..o`.| 000000e0 7f 6f 60 ff 9f 93 88 ff 4d 43 3a ff fa f2 ed ff |.o`.....MC:.....| 000000f0 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................| 00000100 5d 56 50 ff 9f 93 88 ff 7f 6f 60 ff 7f 6f 60 ff |]VP......o`..o`.| 00000110 7f 6f 60 ff 7f 6f 60 ff 7f 6f 60 ff 9f 93 88 ff |.o`..o`..o`.....| 00000120 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 00000130 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....| 00000140 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff 7e 6e 60 ff |~n`.~n`.~n`.~n`.| 00000150 7e 6e 60 ff 9e 92 88 ff 4d 43 3a ff fa f2 ed ff |~n`.....MC:.....| 00000160 fb f6 f2 ff fc f8 f5 ff fc f8 f5 ff fc f8 f5 ff |................| 00000170 5d 56 50 ff 9e 92 88 ff 7d 6d 5f ff 7d 6d 5f ff |]VP.....}m_.}m_.| 00000180 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 9e 92 88 ff |}m_.}m_.}m_.....| 00000190 4d 43 3a ff fa f2 ed ff fb f6 f2 ff fc f8 f5 ff |MC:.............| 000001a0 fc f8 f5 ff fc f8 f5 ff 5d 56 50 ff 9e 92 88 ff |........]VP.....| 000001b0 7d 6d 5f ff 7d 6d 5f ff 7d 6d 5f ff 7d 6d 00 fe |}m_.}m_.}m_.}m..| 000001c0 ff ff 82 fe ff ff 02 00 00 00 00 50 32 00 00 00 |...........P2...| 000001d0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................| * 000001f0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 55 aa |..............U.| 00000200 =============================== StdErr Messages: =============================== unlzma: Decoder error According to this Fedora 14 is visible in sda3. Does anybody know a way to add Fedora 14 to grub.cfg of Ubuntu so I can choose which OS to boot? Thanks in advance.
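    For what it is worth, one common way to handle a setup like this, sketched here purely from the grub.conf and blkid output shown above and not tested on this exact machine, is to add a manual entry for Fedora to /etc/grub.d/40_custom on the Ubuntu side (below the 'exec tail' line) and then re-run sudo update-grub so the entry is copied into grub.cfg:

        menuentry "Fedora 14 (2.6.35.6-45.fc14.i686)" {
            insmod part_msdos
            insmod ext2
            set root='(hd0,msdos3)'
            search --no-floppy --fs-uuid --set=root dea81d77-a375-4d0e-954e-1829f6b91f10
            linux /vmlinuz-2.6.35.6-45.fc14.i686 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM rhgb quiet
            initrd /initramfs-2.6.35.6-45.fc14.i686.img
        }

    The kernel and initramfs file names, the root=/dev/mapper/VolGroup-lv_root arguments and the sda3 UUID are taken from the listings above; since sda3 is Fedora's /boot partition, the paths are relative to it. An alternative sketch is a chainloader entry (set root='(hd0,msdos3)' followed by chainloader +1) that simply hands control to the Grub Legacy boot sector already installed on sda3.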

    Read the article

  • CodePlex Daily Summary for Tuesday, June 15, 2010

    CodePlex Daily Summary for Tuesday, June 15, 2010New ProjectsBackup on Build: Backup critical files on each Visual Studio build.CDN Support for EPiServer CMS: This module adds CDN support for EPiServer CMS by modifying outgoing links.Custom WCF Bindings: This project contains some custom WCF LOB SDK bindings I have created including a SalesForce one. I will blog on updates as they occur. Please se...DocIcon for SharePoint 2010: DocIcon for SharePoint re-enables links from document icons in SharePoint 2010. This feature was in previous versions of SharePoint, but was remove...EEG Peak Detection: EEG Peak Detectionemployeemanagement1: employeemanagement1Enables map services on top of existing map providers like Google Maps: Services include Map visualization services, Map decoration services, Spot registration services and Spot naming services.fbprivacy: Tool to assess your Facebook Privacy SettingsfMRI SVM Toolbox: fMRI SVM ToolboxGCMS – using .Net for human CMS: GCMS makes only what you need to do with a CMS and nothing more and it makes it with .NETqjblog: My First Blog.Send2Sharepoint: Office(Word,Excel,Outlook) and windows explorer addin to upload documents to sharepoint document library. SharePoint Find and Replace: SharePoint Find & Replace allows you to replace a specific string within a site collection with a different value. For example, when you change a l...SharePoint Management Studio: This project developed on Visula Studio 2008 and c# language. The main aim is manage your SharePoint 2007 FARM.SharePoint PageController: A SharePoint solution which provides an extensible framework to perform actions on a per-page basis in SharePoint. OOTB functionality allows for f...SilverNotePad: Simple notepad built using MVVM patern.SolidWorks Addin Development: The SolidWorks Addin Development project is dedicated to helping developers and non-developers with creating fully functional addins.Sunlit World Scheme: Sunlit World Scheme is a nearly R4RS-compliant Scheme implementation that supports threading, TCP, UDP, cryptography, and simple graphics and windo...TimeBend: Time tracking gone wild.TinyCMS: Jednostavan CMS s mogućnosti unosa vijesti, linkova i natječaja. CSS je napravljen tek toliko... Aplikacija izrađena za dev4Fun natjecanje.Ujimanet Android: text categorization tool for androidVisual Storm Engine: Visual Storm es un motor para probar nuevas tecnologias orientadas a la creacion de video juegos. Por ahora solo soporta Windows Vista/7 y usa Dire...New ReleasesAjax ASP.Net Forum: developer.insecla.com-forum_v0.1.4: *VERSION: 0.1.4* FEATURES ADDED Rating Threads (Through AjaxCTK (included Ms .DLL in the BIN folder)) Empowered Within AJAX Custom star ima...AlphaGet: Alpha 3: Important: from this release WinGet changes its name to AlphaGet in order to identify it better and make it search-engine friendly. New Features N...Backup on Build: Backup on Build v1.0.0 Initial Release: Initial Release version 1.0.0Boleto.Net: BoletoNet: Última versão estável da BoletoNet.dll. 
O código fonte dessa versão pode ser encontrado em http://boletonet.codeplex.com/SourceControl/list/change...CDN Support for EPiServer CMS: CDN Support v1: See links on start page for information on how to use and install this module.Chargify.NET: Chargify.NET v0.750: Adding support for creating freemium subscription plans Adding preliminary JSON support Adding ISO 3166-1 Alpha 2 data embedded in the library ...Community Forums NNTP bridge: Community Forums NNTP Bridge V38: Release of the Community Forums NNTP Bridge to access the social and anwsers MS forums with a single, open source NNTP bridge. This release has ad...ContainerOne - C# application server: V0.1.3.0: New minor release containing: Infrastructure - core service An installer of a windows service which provides the following: Service registry Even...DocIcon for SharePoint 2010: wwEsp.DocIcon Deployment Package Release 1.0.0: This package installs the DocIcon feature on a SharePoint 2010 server farm. The solution is deployed as a Farm-level feature that can be enabled or...Ethical Hacking ASP.NET: Version 1.2.0.0: For the complete list of changes, new features and fixes in the new version, please view the Version History page. Read more about the available te...Folder Bookmarks: Folder Bookmarks 1.6.3: The latest version of Folder Bookmarks (1.6.3), with new features and Mini-Menu UI Changes (1.4). Once you have extracted the file, do not delete ...Hades: Projet Hadès - Official Demo - Version 0.1.1 Beta: Second release correcting some bugs... ---------------------------------------------------------------------------- - Projet Hadès - Official Demo...KooBoo Image Gallery: RC 1: This new Version has this features 1) Refactoring to change the mispelled word galery to gallery 2) Change to use the plugin in the same page of ...LibWowArmory: LibWowArmory 0.3 beta: LibWowArmory 0.3 betaThis release of the LibWowArmory source code matches the WoW Armory as of version 3.3.3. Changes since version 0.2.3:Solution...MailChimp4Umbraco: 0.90 stable: Can be used in productionMapWindow6: MapWindow 6.0 June 14: This version adds the WebMercator projection and fixes a bug that was causing some perfect spheres to be created as oblate WGS1984 spheroids.MDownloader: MDownloader-0.15.18.59782: Supported FileServe. Supported SharingMatrix. Fixed minor bugs.MGM - MyGroupManager: MyGroupManager v0.1.5 - Alpha: At this point the application appears feature complete and works pretty well. The code still needs some tweaks (error handling), and a general look...MvcPager: MvcPager 1.4: MvcPager 1.4 source codes and demo projects MvcPager 1.4版源代码及示例文件Nito.LINQ: Beta (v0.6): Rx version The "with Rx" versions of Nito.LINQ are built against Rx 1.0.2563.0, released 2010-06-09. Supported Platforms .NET 4.0 Client Profile, ...open gaze and mouse analyzer: Ogama 3.3: This release was published on 14.06.2010 and is a bugfix release. For the list of changes please visit http://www.ogama.net. Only use this installe...patterns & practices: Prism: Prism 4.0 Drop 2: Prism 4.0 Drop 2 Welcome to the second drop of Prism 4.0 (formally known as the Composite Application Guidance for WPF and Silverlight). 
This drop ...Prism Software Factory Light: 0.5 Beta: 4ward Prism Software Factory Light - 0.5 Beta releaseThis is the first public beta release of the 4ward Prism Software Factory Light that allows to...PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.3: Over the last several months, my primary research effort has been directed at producing strictly portable development methods between C and C# . T...qjblog: v1.source: v1.sourceqjblog: v1blog: V1 BlogQuick Performance Monitor: Version 1.4.2: Added 'Move to new window' functionality.Refix - .NET dependency management: Refix v0.1.0.90 ALPHA: Added console tree-style visualisation of solution dependencies, as well as some bug fixes. This version should work out of the box with the demons...SEMICO Framework: Version Stable 1.0.0.3: Version Stable 1.0.0.3SharePoint Find and Replace: 1.0.16: Version: 1.0.16 This release is the first stable release of this project, including the Microsoft public license agreement. Fixes: Added about dia...SharePoint Management Studio: v1: v1SharePoint PageController: SharePoint PageController: For SharePoint 2010 and 2007 running on IIS 7Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+06: Sw. Is Hw. Lib. 3.0.0.x+06SolidWorks Addin Development: GenericAddinFramework-06.14.2010: R1.SourceGrid: SourceGrid 4.30: Sources are here Note that SourceGrid sources are not hosted on CodePlex. The sources are hosted on bitbucket.org Main Changes Improved hidden ...SSIS Expression Editor & Tester: Expression Editor and Tester v1.0.2.0: Corrected release of expression editor tool, no changes to control. Download and extract the files to get started, no install required. Changes Co...Sunlit World Scheme: Sunlit World Scheme - 20100614 - source and binary: This is the result of building the current source code in Debug mode. The source code is included.TinyCMS: TinyCMS: Source kodVCC: Latest build, v2.1.30614.0: Automatic drop of latest buildVianaNET - Videoanalysis for physical motion: VianaNET 1.2 - beta: This is the VianaNET beta release with some bug fixes. Would like to have some comments on it. Regards, AdrianWorkLogger: Worklogger Beta 1: Simple work logger for Windows in WPFMost Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryPHPExcelMicrosoft SQL Server Community & SamplesASP.NETMost Active Projectspatterns & practices – Enterprise LibraryRhyduino - Arduino and Managed CodeCassandraemonCommunity Forums NNTP bridgedotSpatialjQuery Library for SharePoint Web ServicesBlogEngine.NETLightweight Fluent WorkflowNB_Store - Free DotNetNuke Ecommerce Catalog ModuleUmbraco CMS

    Read the article

  • CodePlex Daily Summary for Saturday, April 10, 2010

    CodePlex Daily Summary for Saturday, April 10, 2010New ProjectsAlan Platform: Платформа, позволяющая создавать окружение для искусственного интеллекта.Eksploracja test: testowy projekt gry eksploracjaGeoJSON.NET: GeoJSON .NETIhan sama... vaihdan myöhemmin: Testaillaan...mailfish: pop mailMiBiblioteca: Aplicación muy simple para realizar búsquedas en un listado de libros en una aplicación de escritorio y en una móvil.ServiceManagementConsole: The Service Management Console is a data-driven web app which I'm setting up to help me learn a bit about ASP.NET MVC, Visual Studio 2010, TFS and ...SimpleGeo.NET: SimpleGeo .NET clientSimplexEngine: a 3D game development framework base on Microsoft XNA. it contains game level editor, 3d model exporters and some other tools which is helpful to c...sosql: A standalone osql clone designed to replicate the functionality of SQL Server's osql. The current release of SoSql supports the basic switches tha...SpearHead: A basic app thingspikie: spikie's codeTwitter Directory: A twitter directory for your organization. Allows users to list their twitter username and any other information you want. The directory is searcha...Weather forecast for handheld internet capable device: Small PHP-project. Display wind speed and temperature for one preconfigured location. Using XML weather data aquired from the popular weather forec...New ReleasesAStar.net: AStar.net 1.12 downloads: AStar.net 1.12 Version detailsChanged framework version requirement to 2.0 to increase compatibility, removed the icon from the dll to decrease spa...BackUpAnyWhere: Milestone 0 - Documentation: First milestone for us is the documentation period when we are working on the documents starting from project presentationBB Scheduler - BroadBand Scheduler: BroadBand Scheduler v3.0: - Broadband service has some of the cheap and best monthly plans for the users all over the nation. And some of the plans include unlimited night d...BizTalk Software Factory: BizTalk Software Factory v1.7: Version 1.7 for BizTalk Server 2006 (R2). This is a service release for the BSF to support updated versions of tools. Most important is the suppor...Braintree Client Library: Braintree-1.2.0: Includes support for Subscription Search and minor bugfixes.Data Access Component: version 2.8: implement linq to dac on .net cf 2.0 and .net cf 3.5. generate dynamic data proxy use reflection.emit.DNA Workstation: DNA Workstation 0.6: Physical memory manager added Grub supportDotNetNuke Russian Language packs: Russian Core Language Pack for DotNetNuke 05.03.01: Russian Core Language Pack for DotNetNuke 05.03.01 Русский языковый пакет для ядра DotNetNuke версии 05.03.01.DotNetNuke® Gallery: 04.03.01 Release Candidate: This is a RELEASE CANDIDATE of an update to fix a showstopper issue related to a mal-formed assembly: tagPrefix attribute that causes an exception ...DotNetNuke® Store: 02.01.31 RC: What's New in this release? Bugs corrected: - The PayPal gateway has been completly rewrited. Security fixes: - Several security fixes has been ap...EnhSim: Release v1.9.8.4: Release v1.9.8.4 Various fixes in GUI only for missing trinkets in dropdowns and mis-spelling of trinket names that would have caused sim to ignore...Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.6 beta 2 Released: Hi, This release contains following enhancements. * It implements “Zoom out” and “Show all” functionality over Zooming. 
Now, user will be abl...GeoJSON.NET: GeoJSON.NET 0.1: GeoJSON.NET 0.1Goblin XNA: Goblin XNA v3.4: This is Goblin XNA Version 3.4 release. The installation process is much simpler now!! Updates: 1. Inlcuded DShowNET.dll and Lidgren.Network.dll...jQuery UI DotNetNuke integration: jQuery Core module 0.6.0: This module package contains the core functionality for the jQuery UI integration. It includes the web controls, infrastructural code for including...jQuery UI DotNetNuke integration: jQUI library 0.6.0 for developers (DEBUG): This release includes the debug built .dll, .pdb, and .xml files for developers wanting to build DNN components using the jQuery UI widgets. The .d...jQuery UI DotNetNuke integration: jQUI library 0.6.0 for developers (RELEASE): This release includes the release built .dll and .xml files for developers wanting to build DNN components using the jQuery UI widgets. The .dll is...Kooboo blog: Kooboo CMS Blog Module for 2.1.0.0: Compatible with Kooboo cms 2.1.0.0 Upgrade to MVC 2Numina Application/Security Framework: Numina.Framework Core 50018: Added bulk import user page Added General settings page for updating Company Name, Theme, and API Key Add/Edit application calls Full URL to h...Open NFe: DANFe v1.9.8.1: Contribuições de Marco Aurélio (McSoft) e Leandro (Studio Classic). Melhorias de menus de contexto, tray icon, tela de configurações, correção do e...Pocket Wiki: wiki71.sbp: Incremental version (source .70 to .71) included changes are: disappearing horizontal bar fixed if you enter an invalid home page directory, you...PowerExt: v1.0 (Alpha 1): v1.0 (Alpha 1). PowerExt can display information such as assembly name, assembly version, public key etc in Explorer's File Properties dialog.RIA Services DataFilter Control for Silverlight: April 2010: Whats new?Updated to RC Added localizer for FilterOperator's (look at documentation) Added localizer for SortDirection's (look at documentation...SharePoint Labs: SPLab4006A-FRA-Level100: SPLab4006A-FRA-Level100 This SharePoint Lab will teach you the 6th best practice you should apply when writing code with the SharePoint API. Lab La...SharePoint Labs: SPLab6002A-FRA-Level300: SPLab6002A-FRA-Level300 This SharePoint Lab will teach you how to create a reusable and distributable project model for developping Feature Receive...SharePoint Labs: SPLab6003A-FRA-Level100: SPLab6003A-FRA-Level100 This SharePoint Lab will teach you how to create a custom Permission Level within Visual Studio. Lab Language : French Lab ...SimpleGeo.NET: SimpleGeo.NET 0.1: SimpleGeo.NET 0.1sosql: SoSql v1.0.0: Initial release of SoSql. This supports most of the basic parameters that osql does. 
sosql - SQL Server Command Line Tool Version 1.0.0.28646 usa...Syringe: Syringe 1.0 (source): Source code for Syringe 1.0Toast (for ASP.NET MVC): Toast (for ASP.NET MVC) 0.2.1: First releasetrx2html: trx2html 0.6: This version supports VS2005, VS2008 and VS2010 trx generated files with the same binaries targeting .Net 2.0Twitter Directory: TwitterDirectory 44240: Initial release See Numina Application/Security Framework for info on how to setup this application.Unit Test Specification Generator: TestDocs 1.0.2.1: Improved the performance when locating dependencies.VCC: Latest build, v2.1.30409.0: Automatic drop of latest buildvisinia: visinia_BETA_Src: The beta version is on its way, now you can drag a module and drop it on the webpage, all this dynamic side of visinia is built through the jquery,...Windows Azure - PHP contributions: PhpAzureExtensions (Azure Drives) - 0.1.1: Extension for use with Windows Azure SDK 1.1! Usage is described here.Most Popular ProjectsWBFS ManagerRawrMicrosoft SQL Server Product Samples: DatabaseASP.NET Ajax LibrarySilverlight ToolkitAJAX Control ToolkitWindows Presentation Foundation (WPF)ASP.NETMicrosoft SQL Server Community & SamplesFacebook Developer ToolkitMost Active ProjectsnopCommerce. Open Source online shop e-commerce solution.Shweet: SharePoint 2010 Team Messaging built with PexRawrAutoPocopatterns & practices – Enterprise LibraryIonics Isapi Rewrite FilterNB_Store - Free DotNetNuke Ecommerce Catalog ModuleFacebook Developer ToolkitFarseer Physics EngineNcqrs Framework - The CQRS framework for .NET

    Read the article

  • Metro: Namespaces and Modules

    - by Stephen.Walther
    The goal of this blog entry is to describe how you can use the Windows JavaScript (WinJS) library to create namespaces. In particular, you learn how to use the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. You also learn how to hide private methods by using the module pattern. Why Do We Need Namespaces? Before we do anything else, we should start by answering the question: Why do we need namespaces? What function do they serve? Do they just add needless complexity to our Metro applications? After all, plenty of JavaScript libraries do just fine without introducing support for namespaces. For example, jQuery has no support for namespaces and jQuery is the most popular JavaScript library in the universe. If jQuery can do without namespaces, why do we need to worry about namespaces at all? Namespaces perform two functions in a programming language. First, namespaces prevent naming collisions. In other words, namespaces enable you to create more than one object with the same name without conflict. For example, imagine that two companies – company A and company B – both want to make a JavaScript shopping cart control and both companies want to name the control ShoppingCart. By creating a CompanyA namespace and CompanyB namespace, both companies can create a ShoppingCart control: a CompanyA.ShoppingCart and a CompanyB.ShoppingCart control. The second function of a namespace is organization. Namespaces are used to group related functionality even when the functionality is defined in different physical files. For example, I know that all of the methods in the WinJS library related to working with classes can be found in the WinJS.Class namespace. Namespaces make it easier to understand the functionality available in a library. If you are building a simple JavaScript application then you won’t have much reason to care about namespaces. If you need to use multiple libraries written by different people then namespaces become very important. Using WinJS.Namespace.define() In the WinJS library, the most basic method of creating a namespace is to use the WinJS.Namespace.define() method. This method enables you to declare a namespace (of arbitrary depth). The WinJS.Namespace.define() method has the following parameters: · name – A string representing the name of the new namespace. You can add nested namespace by using dot notation · members – An optional collection of objects to add to the new namespace For example, the following code sample declares two new namespaces named CompanyA and CompanyB.Controls. Both namespaces contain a ShoppingCart object which has a checkout() method: // Create CompanyA namespace with ShoppingCart WinJS.Namespace.define("CompanyA"); CompanyA.ShoppingCart = { checkout: function (){ return "Checking out from A"; } }; // Create CompanyB.Controls namespace with ShoppingCart WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); // Call CompanyA ShoppingCart checkout method console.log(CompanyA.ShoppingCart.checkout()); // Writes "Checking out from A" // Call CompanyB.Controls checkout method console.log(CompanyB.Controls.ShoppingCart.checkout()); // Writes "Checking out from B" In the code above, the CompanyA namespace is created by calling WinJS.Namespace.define(“CompanyA”). Next, the ShoppingCart is added to this namespace. The namespace is defined and an object is added to the namespace in separate lines of code. 
A different approach is taken in the case of the CompanyB.Controls namespace. The namespace is created and the ShoppingCart object is added to the namespace with the following single line of code: WinJS.Namespace.define( "CompanyB.Controls", { ShoppingCart: { checkout: function(){ return "Checking out from B"; } } } ); Notice that CompanyB.Controls is a nested namespace. The top level namespace CompanyB contains the namespace Controls. You can declare a nested namespace using dot notation and the WinJS library handles the details of creating one namespace within the other. After the namespaces have been defined, you can use either of the two shopping cart controls. You call CompanyA.ShoppingCart.checkout() or you can call CompanyB.Controls.ShoppingCart.checkout(). Using WinJS.Namespace.defineWithParent() The WinJS.Namespace.defineWithParent() method is similar to the WinJS.Namespace.define() method. Both methods enable you to define a new namespace. The difference is that the defineWithParent() method enables you to add a new namespace to an existing namespace. The WinJS.Namespace.defineWithParent() method has the following parameters: · parentNamespace – An object which represents a parent namespace · name – A string representing the new namespace to add to the parent namespace · members – An optional collection of objects to add to the new namespace The following code sample demonstrates how you can create a root namespace named CompanyA and add a Controls child namespace to the CompanyA parent namespace: WinJS.Namespace.define("CompanyA"); WinJS.Namespace.defineWithParent(CompanyA, "Controls", { ShoppingCart: { checkout: function () { return "Checking out"; } } } ); console.log(CompanyA.Controls.ShoppingCart.checkout()); // Writes "Checking out" One significant advantage of using the defineWithParent() method over the define() method is the defineWithParent() method is strongly-typed. In other words, you use an object to represent the base namespace instead of a string. If you misspell the name of the object (CompnyA) then you get a runtime error. Using the Module Pattern When you are building a JavaScript library, you want to be able to create both public and private methods. Some methods, the public methods, are intended to be used by consumers of your JavaScript library. The public methods act as your library’s public API. Other methods, the private methods, are not intended for public consumption. Instead, these methods are internal methods required to get the library to function. You don’t want people calling these internal methods because you might need to change them in the future. JavaScript does not support access modifiers. You can’t mark an object or method as public or private. Anyone gets to call any method and anyone gets to interact with any object. The only mechanism for encapsulating (hiding) methods and objects in JavaScript is to take advantage of functions. In JavaScript, a function determines variable scope. A JavaScript variable either has global scope – it is available everywhere – or it has function scope – it is available only within a function. If you want to hide an object or method then you need to place it within a function. 
For example, the following code contains a function named doSomething() which contains a nested function named doSomethingElse(): function doSomething() { console.log("doSomething"); function doSomethingElse() { console.log("doSomethingElse"); } } doSomething(); // Writes "doSomething" doSomethingElse(); // Throws ReferenceError You can call doSomethingElse() only within the doSomething() function. The doSomethingElse() function is encapsulated in the doSomething() function. The WinJS library takes advantage of function encapsulation to hide all of its internal methods. All of the WinJS methods are defined within self-executing anonymous functions. Everything is hidden by default. Public methods are exposed by explicitly adding the public methods to namespaces defined in the global scope. Imagine, for example, that I want a small library of utility methods. I want to create a method for calculating sales tax and a method for calculating the expected ship date of a product. The following library encapsulates the implementation of my library in a self-executing anonymous function: (function (global) { // Public method which calculates tax function calculateTax(price) { return calculateFederalTax(price) + calculateStateTax(price); } // Private method for calculating state tax function calculateStateTax(price) { return price * 0.08; } // Private method for calculating federal tax function calculateFederalTax(price) { return price * 0.02; } // Public method which returns the expected ship date function calculateShipDate(currentDate) { currentDate.setDate(currentDate.getDate() + 4); return currentDate; } // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); })(this); // Show expected ship date var shipDate = CompanyA.Utilities.calculateShipDate(new Date()); console.log(shipDate); // Show price + tax var price = 12.33; var tax = CompanyA.Utilities.calculateTax(price); console.log(price + tax); In the code above, the self-executing anonymous function contains four functions: calculateTax(), calculateStateTax(), calculateFederalTax(), and calculateShipDate(). The following statement is used to expose only the calcuateTax() and the calculateShipDate() functions: // Export public methods WinJS.Namespace.define("CompanyA.Utilities", { calculateTax: calculateTax, calculateShipDate: calculateShipDate } ); Because the calculateTax() and calcuateShipDate() functions are added to the CompanyA.Utilities namespace, you can call these two methods outside of the self-executing function. These are the public methods of your library which form the public API. The calculateStateTax() and calculateFederalTax() methods, on the other hand, are forever hidden within the black hole of the self-executing function. These methods are encapsulated and can never be called outside of scope of the self-executing function. These are the internal methods of your library. Summary The goal of this blog entry was to describe why and how you use namespaces with the WinJS library. You learned how to define namespaces using both the WinJS.Namespace.define() and WinJS.Namespace.defineWithParent() methods. We also discussed how to hide private members and expose public members using the module pattern.

    Read the article

  • Book Review - Programming Windows Azure by Sriram Krishnan

    - by BuckWoody
    As part of my professional development, I’ve created a list of books to read throughout the year, starting in June of 2011. This is a review of the first one, called Programming Windows Azure by Sriram Krishnan. You can find my entire list of books I’m reading for my career here: http://blogs.msdn.com/b/buckwoody/archive/2011/06/07/head-in-the-clouds-eyes-on-the-books.aspx  Why I Chose This Book: As part of my learning style, I try to read multiple books about a single subject. I’ve found that at least 3 books are necessary to get the right amount of information to me. This is a “technical” work, meaning that it deals with technology and not business, writing or other facets of my career. I’ll have a mix of all of those as I read along. I chose this work in addition to others I’ve read since it covers everything from an introduction to more advanced topics in a single book. It also has some practical examples of actually working with the product, particularly on storage. Although it’s dated, many examples normally translate. I also saw that it had pretty good reviews. What I learned: I learned a great deal about storage, and picked up many useful code snippets. I do think that there could have been more of a focus on the application fabric - but of course that wasn’t as mature a feature when this book was written. I learned some great architecture examples, and in one section I learned more about encryption. In that example, however, I would rather have seen the examples go the other way - the book focused on moving data from on-premise to Azure storage in an encrypted fashion. Using the Application Fabric I would rather see sensitive data left on premise in a hybrid fashion, and connected to from the Azure application. Even so, the examples were very useful. If you’re looking for a good “starter” Azure book, this is a good choice. I also recommend the last chapter as a quick read for a DBA, or Database Administrator. It’s not very long, but useful. Note that the limits described are incorrect - which is one of the dangers of reading a book about any cloud offering. The services offered are updated so quickly that the information is in constant danger of being “stale”. Even so, I found this a useful book, which I believe will help me work with Azure better. Raw Notes: I take notes as I read, calling that process “reading with a pencil”. I find that when I do that I pay attention better, and record some things that I need to know later. I’ll take these notes, categorize them into a OneNote notebook that I synchronize in my Live.com account, and that way I can search them from anywhere. I can even read them on the web, since Live.com has a OneNote program built in. Note that these are the raw notes, so they might not make a lot of sense out of context - I include them here so you can watch my thought process. Programming Windows Azure by Sriram Krishnan: Learning about how to select applications suitable for Distributed Technology. Application Fabric gets the least attention; probably because it was newer at the time. Very clear (Chapter One) Good foundation Background and history, but not too much I normally arrange my descriptions differently, starting with the use-cases and moving to physicality, but this difference helps me. Interesting that I am reading this using Safari Books Online, which uses many of these concepts. 
Taught me some new aspects of a Hypervisor – very low-level information about the Azure Fabric (not to be confused with the Application Fabric feature) (Chapter Two) Good detail of what is included in the SDK. Even more is available now. CS = Cloud Service (Chapter 3) Place Storage info in the configuration file, since it can be streamed in-line with a running app. Ditto for logging, and keep separated configs for staging and testing. Easy-switch in and switch out.  (Chapter 4) There are two Runtime API’s, one of external and one for internal. Realizing how powerful this paradigm really is. Some places seem light, and to drop off but perhaps that’s best. Managing API is not charged, which is nice. I don’t often think about the price, until it comes to an actual deployment (Chapter 5) Csmanage is something I want to dig into deeper. API requires package moves to Blob storage first, so it needs a URL. Csmanage equivalent can be written in Unix scripting using openssl. Upgrades are possible, and you use the upgradeDomainCount attribute in the Service-Definition.csdef file  Always use a low-privileged account to test on the dev fabric, since Windows Azure runs in partial trust. Full trust is available, but can be dangerous and must be well-thought out. (Chapter 6) Learned how to run full CMD commands in a web window – not that you would ever do that, but it was an interesting view into those links. This leads to a discussion on hosting other runtimes (such as Java or PHP) in Windows Azure. I got an expanded view on this process, although this is where the book shows its age a little. Books can be a problem for Cloud Computing for this reason – things just change too quickly. Windows Azure storage is not eventually consistent – it is instantly consistent with multi-phase commit. Plumbing for this is internal, not required to code that. (Chapter 7) REST API makes the service interoperable, hybrid, and consistent across code architectures. Nicely done. Use affinity groups to keep data and code together. Side note: e-book readers need a common “notes” feature. There’s a decent quick description of REST in this chapter. Learned about CloudDrive code – PowerShell sample that mounts Blob storage as a local provider. Works against Dev fabric by default, can be switched to Account. Good treatment in the storage chapters on the differences between using Dev storage and Azure storage. These can be mitigated. No, blobs are not of any size or number. Not a good statement (Chapter 8) Blob storage is probably Azure’s closest play to Infrastructure as a Service (Iaas). Blob change operations must be authenticated, even when public. Chapters on storage are pretty in-depth. Queue Messages are base-64 encoded (Chapter 9) The visibility timeout ensures processing of message in a disconnected system. Order is not guaranteed for a message, so if you need that set an increasing number in the queue mechanism. While Queues are accessible via REST, they are not public and are secured by default. Interesting – the header for a queue request includes an estimated count. This can be useful to create more worker roles in a dynamic system. Each Entity (row) in the Azure Table service is atomic – all or nothing. (Chapter 10) An entity can have up to 255 Properties  Use “ID” for the class to indicate the key value, or use the [DataServiceKey] Attribute.  LINQ makes working with the Azure Table Service much easier, although Interop is certainly possible. Good description on the process of selecting the Partition and Row Key.  
When checking for continuation tokens for pagination, include logic that falls out of the check in case you are at the last page.  On deleting a storage object, it is instantly unavailable, however a background process is dispatched to perform the physical deletion. So if you want to re-create a storage object with the same name, add retry logic into the code. Interesting approach to deleting an index entity without having to read it first – create a local entity with the same keys and apply it to the Azure system regardless of change-state.  Although the “Indexes” description is a little vague, it’s interesting to see a Folding and Stemming discussion a-la the Porter Stemming Algorithm. (Chapter 11)  Presents a better discussion of indexes (at least inverted indexes) later in the chapter. Great treatment for DBA’s in Chapter 11. We need to work on getting secondary indexes in Table storage. There is a limited form of transactions called “Entity Group Transactions” that, although they have conditions, makes a transactional system more possible. Concurrency also becomes an issue, but is handled well if you’re using Data Services in .NET. It watches the Etag and allows you to take action appropriately. I do not recommend using Azure as a location for secure backups. In fact, I would rather have seen the examples in (Chapter 12) go the other way, showing how data could be brought back to a local store as a DR or HA strategy. Good information on cryptography and so on even so. Chapter seems out of place, and should be combined with the Blob chapter.  (Chapter 13) on SQL Azure is dated, although the base concepts are OK.  Nice example of simple ADO.NET access to a SQL Azure (or any SQL Server Really) database.  
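To give a concrete flavor of the “simple ADO.NET access” the last chapter demonstrates, here is a minimal sketch of that kind of call against a SQL Azure (or any SQL Server) database. This is my own illustration rather than code from the book, and the server, database, table, and credentials are placeholders you would replace with your own values.

using System;
using System.Data.SqlClient;

class SqlAzureSample
{
    static void Main()
    {
        // Placeholder connection string - SQL Azure expects the tcp: prefix,
        // port 1433, the user@server login form, and an encrypted connection.
        var connectionString =
            "Server=tcp:yourserver.database.windows.net,1433;" +
            "Database=SampleDb;User ID=youruser@yourserver;Password=yourpassword;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("SELECT TOP 10 Name FROM Products", connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}

Apart from the connection string details, nothing here is Azure-specific, which lines up with the note above about the same code working against any SQL Server, really.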

    Read the article

  • The future for Microsoft

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/16/the-future-for-microsoft.aspxMicrosoft is in the process of reinventing itself. While some may argue that it’s “too little, too late” or that their growing consumer-focused strategy is wrong, the truth of the situation is that Microsoft is reinventing itself into a new company. While Microsoft is now calling themselves a “devices and services” company, that’s not entirely accurate. Let’s look at some facts: Microsoft will always (for the long-term foreseeable future) be financially split into the following divisions: Windows/Operating Systems, which for FY13 made up approximately 24% of overall revenue. Server and Tools, which for FY13 made up approximately 26% of overall revenue. Enterprise/Business Products, which for FY13 made up approximately 32% of overall revenue. Entertainment and Devices, which for FY13 made up approximately 13% of overall revenue. Online Services, which for FY13 made up approximately 4% of overall revenue. It is important to realize that hardware products like the Surface fall under the Windows/Operating Systems division while products like the Xbox 360 fall under the Entertainment and Devices division. (Presumably other hardware, such as mice, keyboards, and cameras, also fall under the Entertainment and Devices division.) It’s also unclear where Microsoft’s recent acquisition of Nokia’s handset division will fall, but let’s assume that it will be under Entertainment and Devices as well. Now, for the sake of argument, let’s assume a slightly different structure that I think is more in line with how Microsoft presents itself and how the general public sees it: Consumer Products and Devices, which would probably make up approximately 9% of overall revenue. Developer Tools, which would probably make up approximately 13% of overall revenue. Enterprise Products and Devices, which would probably make up approximately 47% of overall revenue. Entertainment, which would probably make up approximately 13% of overall revenue. Online Services, which would probably make up approximately 17% of overall revenue. (Just so we’re clear, in this structure hardware products like the Surface, a portion of Windows sales, and other hardware fall under the Consumer Products and Devices division. I’m assuming that more of the income for the Windows division is coming from enterprise/volume licenses so 15% of that income went to the Enterprise Products and Devices division. Most of the enterprise services, like Azure, fall under the Online Services division so half of the Server and Tools income went there as well.) No matter how you look at it, the bulk of Microsoft’s income still comes from not just the enterprise but also software sales, and this really shouldn’t surprise anyone. So, now that the stage is set…what’s the future for Microsoft? The future I see for Microsoft (again, this is just my prediction based on my own instinct, gut-feel and publicly available information) is this: Microsoft is becoming a consumer-focused enterprise company. Let’s look at it a different way. Microsoft is an enterprise-focused company trying to create a larger consumer presence.  To a large extent, this is the exact opposite of Apple, who is really a consumer-focused company trying to create a larger enterprise presence. The major reason consumer-focused companies (like Apple) have started making in-roads into the enterprise is the “bring your own device” phenomenon. 
Yes, Apple has created some “game-changing” products but their enterprise influence is still relatively small. Unfortunately (for this blog post at least), Apple provides revenue in terms of hardware products rather than business divisions, so it’s not possible to do a direct comparison. However, in the interest of transparency, from Apple’s Quarterly Report (filed 24 July 2013), their revenue breakdown is: iPhone, which for the 3 months ending 29 June 2013 made up approximately 51% of revenue. iPad, which for the 3 months ending 29 June 2013 made up approximately 18% of revenue. Mac, which for the 3 months ending 29 June 2013 made up approximately 14% of revenue. iPod, which for the 3 months ending 29 June 2013 made up approximately 2% of revenue. iTunes, Software, and Services, which for the 3 months ending 29 June 2013 made up approximately 11% of revenue. Accessories, which for the 3 months ending 29 July 2013 made up approximately 3% of revenue. From this, it’s pretty clear that Apple is a consumer-and-hardware-focused company. At this point, you may be asking yourself “Where is all of this going?” The answer to that lies in Microsoft’s shift in company focus. They are becoming more consumer focused, but what exactly does that mean? The biggest change (at least that’s been in the news lately) is the pending purchase of Nokia’s handset division. This, in combination with their Surface line of tablets and the Xbox, will put Microsoft squarely in the realm of a hardware-focused company in addition to being a software-focused company. That can (and most likely will) shift the revenue split to looking at revenue based on software sales (both consumer and enterprise) and also hardware sales (mostly on the consumer side). If we look at things strictly from a Windows perspective, Microsoft clearly has a lot of irons in the fire at the moment. Discounting the various product SKUs available and painting the picture with broader strokes, there are currently 5 different Windows-based operating systems: Windows Phone Windows Phone 7.x, which runs on top of the Windows CE kernel Windows Phone 8.x+, which runs on top of the Windows 8 kernel Windows RT The ARM-based version of Windows 8, which runs on top of the Windows 8 kernel Windows (Pro) The Intel-based version of Windows 8, which runs on top of the Windows 8 kernel Xbox The Xbox 360, which runs it’s own proprietary OS. The Xbox One, which runs it’s own proprietary OS, a version of Windows running on top of the Windows 8 kernel and a proprietary “manager” OS which manages the other two. Over time, Windows Phone 7.x devices will fade so that really leaves 4 different versions. Looking at Windows RT and Windows Phone 8.x paints an interesting story. Right now, all mobile phone devices run on some sort of ARM chip and that doesn’t look like it will change any time soon. That means Microsoft has two different Windows based operating systems for the ARM platform. Long term, it doesn’t make sense for Microsoft to continue supporting that arrangement. I have long suspected (since the Surface was first announced) that Microsoft will unify these two variants of Windows and recent speculation from some of the leading Microsoft watchers lends credence to this suspicion. It is rumored that upcoming Windows Phone releases will include support for larger screen sizes, relax the requirement to have a hardware-based back button and will continue to improve API parity between Windows Phone and Windows RT. 
At the same time, Windows RT will include support for smaller screen sizes. Since both of these operating systems are based on the same core Windows kernel, it makes sense (both from a financial and development resource perspective) for Microsoft to unify them. The user interfaces are already very similar. So similar in fact, that visually it’s difficult to tell them apart. To illustrate this, here are two screen captures: Other than a few variations (the Bing News app, the picture shown in the Pictures tile and the spacing between the tiles) these are identical. The one on the left is from my Windows 8.1 laptop (which looks the same as on my Surface RT) and the one on the right is from my Windows Phone 8 Lumia 925. This pretty clearly shows that from a consumer perspective, there really is no practical difference between how these two operating systems look and how you interact with them. For the consumer, your entertainment device (Xbox One), phone (Windows Phone) and mobile computing device (Surface [or some other vendors tablet], laptop, netbook or ultrabook) and your desktop computing device (desktop) will all look and feel the same. While many people will denounce this consistency of user experience, I think this will be a good thing in the long term, especially for the upcoming generations. For example, my 5-year old son knows how to use my tablet, phone and Xbox because they all feature nearly identical user experiences. When Windows 8 was released, Microsoft allowed a Windows Store app to be purchased once and installed on as many as 5 devices. With Windows 8.1, this limit has been increased to over 50. Why is that important? If you consider that your phone, computing devices, and entertainment device will be running the same operating system (with minor differences related to physical hardware chipset), that means that I could potentially purchase my sons favorite Angry Birds game once and be able to install it on all of the devices I own. (And for those of you wondering, it’s only 7 [at the moment].) From an app developer perspective, the story becomes even more compelling. Right now there are differences between the different operating systems, but those differences are shrinking. The user interface technology for both is XAML but there are different controls available and different user experience concepts. Some of the APIs available are the same while some are not. You can’t develop a Windows Phone app that can also run on Windows (either Windows Pro or RT). With each release of Windows Phone and Windows RT, those difference become smaller and smaller. Add to this mix the Xbox One, which will also feature a Windows-based operating system and the same “modern” (tile-based) user interface and the visible distinctions between the operating systems will become even smaller. Unifying the operating systems means one set of APIs and one code base to maintain for an app that can run on multiple devices. One code base means it’s easier to add features and fix bugs and that those changes become available on all devices at the same time. It also means a single app store, which will increase the discoverability and reach of your app and consolidate revenue and app profile management. Now, the choice of what devices an app is available on becomes a simple checkbox decision rather than a technical limitation. Ultimately, this means more apps available to consumers, which is always good for the app ecosystem. Is all of this just rumor, speculation and conjecture? 
Of course, but it’s not unfounded. As I mentioned earlier, some of the prominent Microsoft watchers are also reporting similar rumors. However, Microsoft itself has even hinted at this future with their recent organizational changes and by telling developers “if you want to develop for Xbox One, start developing for Windows 8 now.” I think this pretty clearly paints the following picture: Microsoft is committed to the “modern” user interface paradigm. Microsoft is changing their release cadence (for all products, not just operating systems) to be faster and more modular. Microsoft is going to continue to unify their OS platforms both from a consumer perspective and a developer perspective. While this direction will certainly concern some people it will excite many others. Microsoft’s biggest failing has always been following through with a strong and sustained marketing strategy that presents a consistent view point and highlights what this unified and connected experience looks like and how it benefits consumers and enterprises. We’ve started to see some of this over the last few years, but it needs to continue and become more aggressive and consistent. In the long run, I think Microsoft will be able to pull all of these technologies and devices together into one seamless ecosystem. It isn’t going to happen overnight, but my prediction is that we will be there by the end of 2016. As both a consumer and a developer, I, for one, am excited about the future of Microsoft.

    Read the article

  • [GEEK SCHOOL] Network Security 2: Preventing Disaster with User Account Control

    - by Ciprian Rusen
    In this second lesson in our How-To Geek School about securing the Windows devices in your network, we will talk about User Account Control (UAC). Users encounter this feature each time they need to install desktop applications in Windows, when some applications need administrator permissions in order to work and when they have to change different system settings and files. UAC was introduced in Windows Vista as part of Microsoft’s “Trustworthy Computing” initiative. Basically, UAC is meant to act as a wedge between you and installing applications or making system changes. When you attempt to do either of these actions, UAC will pop up and interrupt you. You may either have to confirm you know what you’re doing, or even enter an administrator password if you don’t have those rights. Some users find UAC annoying and choose to disable it but this very important security feature of Windows (and we strongly caution against doing that). That’s why in this lesson, we will carefully explain what UAC is and everything it does. As you will see, this feature has an important role in keeping Windows safe from all kinds of security problems. In this lesson you will learn which activities may trigger a UAC prompt asking for permissions and how UAC can be set so that it strikes the best balance between usability and security. You will also learn what kind of information you can find in each UAC prompt. Last but not least, you will learn why you should never turn off this feature of Windows. By the time we’re done today, we think you will have a newly found appreciation for UAC, and will be able to find a happy medium between turning it off completely and letting it annoy you to distraction. What is UAC and How Does it Work? UAC or User Account Control is a security feature that helps prevent unauthorized system changes to your Windows computer or device. These changes can be made by users, applications, and sadly, malware (which is the biggest reason why UAC exists in the first place). When an important system change is initiated, Windows displays a UAC prompt asking for your permission to make the change. If you don’t give your approval, the change is not made. In Windows, you will encounter UAC prompts mostly when working with desktop applications that require administrative permissions. For example, in order to install an application, the installer (generally a setup.exe file) asks Windows for administrative permissions. UAC initiates an elevation prompt like the one shown earlier asking you whether it is okay to elevate permissions or not. If you say “Yes”, the installer starts as administrator and it is able to make the necessary system changes in order to install the application correctly. When the installer is closed, its administrator privileges are gone. If you run it again, the UAC prompt is shown again because your previous approval is not remembered. If you say “No”, the installer is not allowed to run and no system changes are made. If a system change is initiated from a user account that is not an administrator, e.g. the Guest account, the UAC prompt will also ask for the administrator password in order to give the necessary permissions. Without this password, the change won’t be made. Which Activities Trigger a UAC Prompt? 
There are many types of activities that may trigger a UAC prompt: Running a desktop application as an administrator Making changes to settings and files in the Windows and Program Files folders Installing or removing drivers and desktop applications Installing ActiveX controls Changing settings to Windows features like the Windows Firewall, UAC, Windows Update, Windows Defender, and others Adding, modifying, or removing user accounts Configuring Parental Controls in Windows 7 or Family Safety in Windows 8.x Running the Task Scheduler Restoring backed-up system files Viewing or changing the folders and files of another user account Changing the system date and time You will encounter UAC prompts during some or all of these activities, depending on how UAC is set on your Windows device. If this security feature is turned off, any user account or desktop application can make any of these changes without a prompt asking for permissions. In this scenario, the different forms of malware existing on the Internet will also have a higher chance of infecting and taking control of your system. In Windows 8.x operating systems you will never see a UAC prompt when working with apps from the Windows Store. That’s because these apps, by design, are not allowed to modify any system settings or files. You will encounter UAC prompts only when working with desktop programs. What You Can Learn from a UAC Prompt? When you see a UAC prompt on the screen, take time to read the information displayed so that you get a better understanding of what is going on. Each prompt first tells you the name of the program that wants to make system changes to your device, then you can see the verified publisher of that program. Dodgy software tends not to display this information and instead of a real company name, you will see an entry that says “Unknown”. If you have downloaded that program from a less than trustworthy source, then it might be better to select “No” in the UAC prompt. The prompt also shares the origin of the file that’s trying to make these changes. In most cases the file origin is “Hard drive on this computer”. You can learn more by pressing “Show details”. You will see an additional entry named “Program location” where you can see the physical location on your hard drive, for the file that’s trying to perform system changes. Make your choice based on the trust you have in the program you are trying to run and its publisher. If a less-known file from a suspicious location is requesting a UAC prompt, then you should seriously consider pressing “No”. What’s Different About Each UAC Level? Windows 7 and Windows 8.x have four UAC levels: Always notify – when this level is used, you are notified before desktop applications make changes that require administrator permissions or before you or another user account changes Windows settings like the ones mentioned earlier. When the UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This is the most secure and also the most annoying way to set UAC because it triggers the most UAC prompts. Notify me only when programs/apps try to make changes to my computer (default) – Windows uses this as the default for UAC. When this level is used, you are notified before desktop applications make changes that require administrator permissions. If you are making system changes, UAC doesn’t show any prompts and it automatically gives you the necessary permissions for making the changes you desire. 
When a UAC prompt is shown, the desktop is dimmed and you must choose “Yes” or “No” before you can do anything else. This level is slightly less secure than the previous one because malicious programs can be created for simulating the keystrokes or mouse moves of a user and change system settings for you. If you have a good security solution in place, this scenario should never occur. Notify me only when programs/apps try to make changes to my computer (do not dim my desktop) – this level is different from the previous in in the fact that, when the UAC prompt is shown, the desktop is not dimmed. This decreases the security of your system because different kinds of desktop applications (including malware) might be able to interfere with the UAC prompt and approve changes that you might not want to be performed. Never notify – this level is the equivalent of turning off UAC. When using it, you have no protection against unauthorized system changes. Any desktop application and any user account can make system changes without your permission. How to Configure UAC If you would like to change the UAC level used by Windows, open the Control Panel, then go to “System and Security” and select “Action Center”. On the column on the left you will see an entry that says “Change User Account Control settings”. The “User Account Control Settings” window is now opened. Change the position of the UAC slider to the level you want applied then press “OK”. Depending on how UAC was initially set, you may receive a UAC prompt requiring you to confirm this change. Why You Should Never Turn Off UAC If you want to keep the security of your system at decent levels, you should never turn off UAC. When you disable it, everything and everyone can make system changes without your consent. This makes it easier for all kinds of malware to infect and take control of your system. It doesn’t matter whether you have a security suite or antivirus installed or third-party antivirus, basic common-sense measures like having UAC turned on make a big difference in keeping your devices safe from harm. We have noticed that some users disable UAC prior to setting up their Windows devices and installing third-party software on them. They keep it disabled while installing all the software they will use and enable it when done installing everything, so that they don’t have to deal with so many UAC prompts. Unfortunately this causes problems with some desktop applications. They may fail to work after you enable UAC. This happens because, when UAC is disabled, the virtualization techniques UAC uses for your applications are inactive. This means that certain user settings and files are installed in a different place and when you turn on UAC, applications stop working because they should be placed elsewhere. Therefore, whatever you do, do not turn off UAC completely! Coming up next … In the next lesson you will learn about Windows Defender, what this tool can do in Windows 7 and Windows 8.x, what’s different about it in these operating systems and how it can be used to increase the security of your system.
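To make the elevation behavior described in this lesson a bit more concrete for developers, here is a small illustrative C# sketch - it is not part of the original lesson - showing how a desktop application can check whether it is already running elevated and, if not, relaunch itself with the "runas" verb, which is what makes Windows display the UAC prompt discussed above. The restart and error handling are deliberately minimal.

using System;
using System.Diagnostics;
using System.Security.Principal;

class ElevationDemo
{
    // True when the current process already runs with a full administrator token.
    static bool IsElevated()
    {
        var principal = new WindowsPrincipal(WindowsIdentity.GetCurrent());
        return principal.IsInRole(WindowsBuiltInRole.Administrator);
    }

    static void Main()
    {
        if (IsElevated())
        {
            Console.WriteLine("Already elevated; administrative changes are allowed.");
            return;
        }

        // Relaunch the same executable through ShellExecute with the "runas" verb.
        // This is what triggers the UAC elevation prompt; if the user picks "No",
        // Process.Start throws a Win32Exception (ERROR_CANCELLED).
        var startInfo = new ProcessStartInfo
        {
            FileName = Process.GetCurrentProcess().MainModule.FileName,
            UseShellExecute = true,
            Verb = "runas"
        };

        try
        {
            Process.Start(startInfo);
        }
        catch (System.ComponentModel.Win32Exception)
        {
            Console.WriteLine("The user declined the UAC prompt; no changes were made.");
        }
    }
}

Note that how the prompt appears (dimmed desktop, password request for non-administrators, or no prompt at all) follows whatever UAC level is configured, exactly as described in the levels above.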

    Read the article

  • CodePlex Daily Summary for Saturday, December 15, 2012

    CodePlex Daily Summary for Saturday, December 15, 2012Popular ReleasesAmarok Framework Library: 1.12: refactored agent-specific implementation introduced new interface representing important concepts like IDispatcher, IPublisher Refer to the Release Notes for details.sb0t v.5: sb0t 5.02b beta 2: Bug fix for Windows XP users (tab resize) Bug fix where ib0t stopped working A lot of command tweaksCRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1116.8): [FIX] Fixed issue not displaying CRM system button images correctly (incorrect path in file VisualRibbonEditor.exe.config)MySqlBackup.NET - MySQL Backup Solution for C#, VB.NET, ASP.NET: MySqlBackup.NET 1.5.6 beta: Fix Bug: If encryption is applied in Export Process, the generated encrypted SQL file is not able to import Stored Procedures, Functions, Triggers, Events and View. Fix Bug: In some unknown cases, the SHOW CREATE TABLE `tablename` query will return byte array. Improve 1: During Export, StreamWriter is opened and closed several times when writting to the dump file, which this is considered as not a good practice. Improve 2: SQL line in class Database method GetEvents: "SHOW EVENTS WHERE ...My Expenses Windows Store LOB App Demo: My Expenses Version 1: This is version 1 of the MyExpenses Windows 8 line of business demo app. The app is written in XAML and C#. It calls a WCF service that works with a SQL Server database. The app uses the Callisto toolkit. You can get it at https://github.com/timheuer/callisto. The Expenses.sql file contains the SQL to create the Expenses database. The ExpensesWCFService.zip file contains the WCF service, also written in C#. You should create a WCF service. Create an Entity Framework model and point it to...EasyTwitter: EasyTwitter basic operations: See the commit log to see the features!, check history files to see examples of how to use this libraryCommand Line Parser Library: 1.9.3.31 rc0: Main assembly CommandLine.dll signed. Removed old commented code. Added missing XML documentation comments. Two (very) minor code refactoring changes.BlackJumboDog: Ver5.7.4: 2012.12.13 Ver5.7.4 (1)Web???????、???????????????????????????????????????????VFPX: ssClasses A1.0: My initial release. See https://vfpx.codeplex.com/wikipage?title=ssClasses&referringTitle=Home for a brief description of what is inside this releaseHome Access Plus+: v8.6: v8.6.1213.1220 Added: Group look up to the visible property of the Booking System Fixed: Switched to using the outlook/exchange thumbnailPhoto instead of jpegPhoto Added: Add a blank paragraph below the tiles. This means that the browser displays a vertical scroller when resizing the window. Previously it was possible for the bottom edge of a tile not to be visible if the browser window was resized. Added: Booking System: Only Display Day+Month on the booking Home Page. This allows for the cs...Layered Architecture Solution Guidance (LASG): LASG 1.0.0.8 for Visual Studio 2012: PRE-REQUISITES Open GAX (Please install Oct 4, 2012 version) Microsoft® System CLR Types for Microsoft® SQL Server® 2012 Microsoft® SQL Server® 2012 Shared Management Objects Microsoft Enterprise Library 5.0 (for the generated code) Windows Azure SDK (for layered cloud applications) Silverlight 5 SDK (for Silverlight applications) THE RELEASE This release only works on Visual Studio 2012. 
Known Issue If you choose the Database project, the solution unfolding time will be slow....Fiskalizacija za developere: FiskalizacijaDev 2.0: Prva prava produkcijska verzija - Zakon je tu, ova je verzija uskladena sa trenutno važecom Tehnickom specifikacijom (v1.2. od 04.12.2012.) i spremna je za produkcijsko korištenje. Verzije iza ove ce ovisiti o naknadnim izmjenama Zakona i/ili Tehnicke specifikacije, odnosno, o eventualnim greškama u radu/zahtjevima community-a za novim feature-ima. Novosti u v2.0 su: - That assembly does not allow partially trusted callers (http://fiskalizacija.codeplex.com/workitem/699) - scheme IznosType...Simple Injector: Simple Injector v1.6.1: This patch release fixes a bug in the integration libraries that disallowed the application to start when .NET 4.5 was not installed on the machine (but only .NET 4.0). The following packages are affected: SimpleInjector.Integration.Web.dll SimpleInjector.Integration.Web.Mvc.dll SimpleInjector.Integration.Wcf.dll SimpleInjector.Extensions.LifetimeScoping.dllBootstrap Helpers: Version 1: First releasesheetengine - Isometric HTML5 JavaScript Display Engine: sheetengine v1.2.0: Main featuresOptimizations for intersectionsThe main purpose of this release was to further optimize rendering performance by skipping object intersections with other sheets. From now by default an object's sheets will only intersect its own sheets and never other static or dynamic sheets. This is the usual scenario since objects will never bump into other sheets when using collision detection. DocumentationMany of you have been asking for proper documentation, so here it goes. Check out the...DirectX Tool Kit: December 11, 2012: December 11, 2012 Ex versions of DDSTextureLoader and WICTextureLoader Removed use of ATL's CComPtr in favor of WRL's ComPtr for all platforms to support VS Express editions Updated VS 2010 project for official 'property sheet' integration for Windows 8.0 SDK Minor fix to CommonStates for Feature Level 9.1 Tweaked AlphaTestEffect.cpp to work around ARM NEON compiler codegen bug Added dxguid.lib as a default library for Debug builds to resolve GUID link issuesArcGIS Editor for OpenStreetMap: ArcGIS Editor for OSM 2.1 Final for 10.1: We are proud to announce the release of ArcGIS Editor for OpenStreetMap version 2.1. This download is compatible with ArcGIS 10.1, and includes setups for the Desktop Component, Desktop Component when 64 bit Background Geoprocessing is installed, and the Server Component. Important: if you already have ArcGIS Editor for OSM installed but want to install this new version, you will need to uninstall your previous version and then install this one. This release includes support for the ArcGIS 1...SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8.2: This release just contains some fixes that have been done since the last release. Plus, this is strong named as well. I apologize for the lack of updates but my free time is less these days.Media Companion: MediaCompanion3.511b release: Two more bug fixes: - General Preferences were not getting restored - Fanart and poster image files were being locked, preventing changes to themVodigi Open Source Interactive Digital Signage: Vodigi Release 5.5: The following enhancements and fixes are included in Vodigi 5.5. 
Vodigi Administrator - Manage Music Files - Add Music Files to Image Slide Shows - Manage System Messages - Display System Messages to Users During Login - Ported to Visual Studio 2012 and MVC 4 - Added New Vodigi Administrator User Guide Vodigi Player - Improved Login/Schedule Startup Procedure - Startup Using Last Known Schedule when Disconnected on Startup - Improved Check for Schedule Changes - Now Every 15 Minutes - Pla...New Projects1Q86: testAdd Ratings to SharePoint Blog Site: This web part is an ‘administrative’ level tool used to update a Blog site to a) enable ratings on the post and b) add the User Ratings web part to the site. BadmintonBuddy: Source codeBetter Place Admin: Admin client for the BetterPlaceBooking ProjectBorkoSi: BorkoSi WebPage Source code.ChimpHD - 2D Evolved: High quality OpenCL accelerated 2D engine, capable of full complex yet easy to code 2D games. This is SpaceChimp2.0, wrote from the ground up, not just patchedCoderJoe: Repository for code samples used in my blog.CodeTemplate: ??????Data Persistent Object: ORM tool to access SQL Server, log record changes, Json and Linq supportedDevian: a simple web portal based on asp.net 2.0HD: hdHospital Tracking: hospital patient tracking ????? ???? ??????? ???? ?????Instant Get: An auction based c# applicationipangolin: DICOM stands for Digital Imaging and COmmunication in Medicine. The DICOM standard addresses the basic connectivity between different imaging devices.ISEFun: PowerShell module(s) to simplify work in it. It contains PowerShell scripts, compiled libs and some formating files. Several modules will come in one batch as optional features.Kerjasama: Aplikasi Database Kerjasama Bagian Kerjasama Kota SemarangMCEBuddy Viewer: Windows Media Center Plugin for MCEBuddy 2.x MusicForMyBlog: Link your recently played music in Itunes to your blog.My Expenses Windows Store LOB App Demo: This is a sample Windows 8 LOB app. Employees can use this app to create and submit expense reports. And managers can approve or reject those reports.Node Paint: An app based on the nodegarden application created by alphalabs.ccNPhysics: NPhysics - Physical Data Types for .NETomr.domready.js: Dom is ready? This project provides easy to detect ready event of dom.RegSecEdit: set registry security from command line, or batch file.Sitecore - Logger Module: This module provides an abstraction of the built-in Sitecore Logging features.SMART LMS: Welcome to SMART LMS, a learning management system with a difference, this software will allow teachers to create custom education activities such as quizzes.TaskManagerDD: DotNetNuke task manager tutorialThe Reactive Extensions for JavaScript: RxJS or Reactive Extensions for JavaScript is a library for transforming, composing, and querying streams of data.Timestamp: Tool for generating timestamps into clipboard for fast use.TweetGarden: Visualising connected users using their tweetsUpdateContentType: UpdateContentType atualiza os modelos de documentos utilizados por tipos de conteúdo do SharePoint a partir de documentos armazenados em uma biblioteca.User Rating Web Part for SharePoint 2010: User Rating Web Part - the fix for Ratings in SharePoint!webget: webgetwhatsnew.exe a command line utility to find new files: whatsnew.exe is a command line utility that lists the files created (new files) in a given number of days. 
whatsnew.exe 's syntax is very simple: whatsnew path numberofdays Also whatsnew supports other options like HTML or XML output, hyperlinked outputs and more.whoami: ip address resolverWPFReview: WPF code sample

    Read the article

  • Caveats with the runAllManagedModulesForAllRequests in IIS 7/8

    - by Rick Strahl
    One of the nice enhancements in IIS 7 (and now 8) is the ability to be able to intercept non-managed - ie. non ASP.NET served - requests from within ASP.NET managed modules. This opened up a ton of new functionality that could be applied across non-managed content using .NET code. I thought I had a pretty good handle on how IIS 7's Integrated mode pipeline works, but when I put together some samples last night I realized that the way that managed and unmanaged requests fire into the pipeline is downright confusing, especially when it comes to the runAllManagedModulesForAllRequests attribute. There are a number of settings that can affect whether a managed module receives non-ASP.NET content requests such as static files or requests from other frameworks like PHP or ASP classic, and that is the topic of this blog post. Native and Managed Modules The integrated mode IIS pipeline for IIS 7 and later - as the name suggests - allows for integration of ASP.NET pipeline events in the IIS request pipeline. Natively IIS runs unmanaged code and there are a host of native mode modules that handle the core behavior of IIS. If you set up a new IIS site or application without managed code support only the native modules are supported and fired without any interaction between native and managed code. If you use the Integrated pipeline with managed code enabled however, things get a little more confusing as both native modules and .NET managed modules can fire against the same IIS request. If you open up the IIS Modules dialog you see both managed and unmanaged modules. Unmanaged modules point at physical files on disk, while managed modules point at .NET types and files referenced from the GAC or the current project's BIN folder. Both native and managed modules can co-exist and execute side by side and on the same request. When running in IIS 7 the IIS pipeline actually instantiates the ASP.NET runtime (via the System.Web.PipelineRuntime class) which unlike the core HttpRuntime classes in ASP.NET receives notification callbacks when IIS integrated mode events fire. The IIS pipeline is smart enough to detect whether managed handlers are attached and if there are none these notifications don't fire, improving performance. The good news about all of this for .NET devs is that ASP.NET style modules can be used for just about every kind of IIS request. All you need to do is create a new Web Application and enable ASP.NET on it, and then attach managed handlers. Handlers can look at ASP.NET content (ie. ASPX pages, MVC, WebAPI etc. requests) as well as non-ASP.NET content including static content like HTML files, images, javascript and css resources etc. It's very cool that this capability has been surfaced. However, with that functionality comes a lot of responsibility. Because every request passes through the ASP.NET pipeline when managed modules (or handlers) are attached, there are possible performance implications that come with it. Running through the ASP.NET pipeline does add some overhead. 
ASP.NET and Your Own Modules When you create a new ASP.NET project typically the Visual Studio templates create the modules section like this: <system.webServer> <validation validateIntegratedModeConfiguration="false" /> <modules runAllManagedModulesForAllRequests="true" > </modules> </system.webServer> Specifically the interesting thing about this is the runAllManagedModulesForAllRequest="true" flag, which seems to indicate that it controls whether any registered modules always run, even when the value is set to false. Realistically though this flag does not control whether managed code is fired for all requests or not. Rather it is an override for the preCondition flag on a particular handler. With the flag set to the default true setting, you can assume that pretty much every IIS request you receive ends up firing through your ASP.NET module pipeline and every module you have configured is accessed even by non-managed requests like static files. In other words, your module will have to handle all requests. Now so far so obvious. What's not quite so obvious is what happens when you set the runAllManagedModulesForAllRequest="false". You probably would expect that immediately the non-ASP.NET requests no longer get funnelled through the ASP.NET Module pipeline. But that's not what actually happens. For example, if I create a module like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" /> by default it will fire against ALL requests regardless of the runAllManagedModulesForAllRequests flag. Even if the value runAllManagedModulesForAllRequests="false", the module is fired. Not quite expected. So what is the runAllManagedModulesForAllRequests really good for? It's essentially an override for managedHandler preCondition. If I declare my handler in web.config like this:<add name="SharewareModule" type="HowAspNetWorks.SharewareMessageModule" preCondition="managedHandler" /> and the runAllManagedModulesForAllRequests="false" my module only fires against managed requests. If I switch the flag to true, now my module ends up handling all IIS requests that are passed through from IIS. The moral of the story here is that if you intend to only look at ASP.NET content, you should always set the preCondition="managedHandler" attribute to ensure that only managed requests are fired on this module. But even if you do this, realize that runAllManagedModulesForAllRequests="true" can override this setting. runAllManagedModulesForAllRequests and Http Application Events Another place the runAllManagedModulesForAllRequest attribute affects is the Global Http Application object (typically in global.asax) and the Application_XXXX events that you can hook up there. So while the events there are dynamically hooked up to the application class, they basically behave as if they were set with the preCodition="managedHandler" configuration switch. The end result is that if you have runAllManagedModulesForAllRequests="true" you'll see every Http request passed through the Application_XXXX events, and you only see ASP.NET requests with the flag set to "false". What's all that mean? Configuring an application to handle requests for both ASP.NET and other content requests can be tricky especially if you need to mix modules that might require both. Couple of things are important to remember. If your module doesn't need to look at every request, by all means set a preCondition="managedHandler" on it. 
This will at least allow it to respond to the runAllManagedModulesForAllRequests="false" flag and then only process ASP.NET requests. Look really carefully to see whether you actually need runAllManagedModulesForAllRequests="true" in your applications as set by the default new project templates in Visual Studio. Part of the reason this is the default is that it was required for the initial versions of IIS 7 and ASP.NET 2 in order to handle MVC extensionless URLs. However, if you are running IIS 7 or later and .NET 4.0 you can use the ExtensionlessUrlHandler instead to get MVC functionality without requiring runAllManagedModulesForAllRequests="true": <handlers> <remove name="ExtensionlessUrlHandler-Integrated-4.0" /> <add name="ExtensionlessUrlHandler-Integrated-4.0" path="*." verb="GET,HEAD,POST,DEBUG,PUT,DELETE,PATCH,OPTIONS" type="System.Web.Handlers.TransferRequestHandler" preCondition="integratedMode,runtimeVersionv4.0" /> </handlers> Oddly this is the default for Visual Studio 2012 MVC template apps, so I'm not sure why the default template still adds runAllManagedModulesForAllRequests="true" - it should be enabled only if there's a specific need to access non ASP.NET requests. As a side note, it's interesting that when you access a static HTML resource, you can actually write into the Response object and get the output to show, which is trippy. I haven't looked closely to see how this works - whether ASP.NET just fires directly into the native output stream or whether the static requests are re-routed directly through the ASP.NET pipeline once a managed code module is detected. This doesn't work for all non ASP.NET resources - for example, I can't do the same with ASP classic requests, but it makes for an interesting demo when injecting HTML content into a static HTML page :-) Note that on the original Windows Server 2008 and Vista (IIS 7.0) you might need a HotFix in order for ExtensionLessUrlHandler to work properly for MVC projects. On my live server I needed it (about 6 months ago), but others have observed that the latest service updates have integrated this functionality and the hotfix is not required. On IIS 7.5 and later I've not needed any patches for things to just work. Plan for non-ASP.NET Requests It's important to remember that if you write a .NET Module to run on IIS 7, there's no way for you to prevent non-ASP.NET requests from hitting your module. So make sure you plan to support requests to extensionless URLs and to static resources like files. Luckily ASP.NET creates a full Request and full Response object for you for non ASP.NET content. So even for static files and even for ASP classic for example, you can look at Request.FilePath or Request.ContentType (in post handler pipeline events) to determine what content you are dealing with. 
As always with Module design make sure you check for the conditions in your code that make the module applicable and if a filter fails immediately exit - minimize the code that runs if your module doesn't need to process the request. © Rick Strahl, West Wind Technologies, 2005-2012. Posted in IIS7, ASP.NET
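To illustrate the "check your conditions and exit early" advice, here is a minimal sketch of such a managed module. The namespace, class name, and the header it writes are invented for the example (it is not the SharewareMessageModule mentioned earlier); it hooks PostAuthorizeRequest, ignores anything that is not a static .html file, and only then touches the response.

using System;
using System.Web;

namespace HowIisModulesWork
{
    public class HtmlOnlyModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.PostAuthorizeRequest += OnPostAuthorizeRequest;
        }

        private static void OnPostAuthorizeRequest(object sender, EventArgs e)
        {
            var app = (HttpApplication)sender;

            // With runAllManagedModulesForAllRequests="true" (or no managedHandler
            // preCondition) this fires for static files, ASP classic, PHP and so on,
            // so bail out as cheaply as possible when the request is not interesting.
            if (!app.Request.FilePath.EndsWith(".html", StringComparison.OrdinalIgnoreCase))
                return;

            app.Response.AppendHeader("X-Html-Module", "processed");
        }

        public void Dispose()
        {
        }
    }
}

It would be registered in the <modules> section just like the SharewareModule shown earlier, with or without the managedHandler preCondition depending on whether it should ever see non-ASP.NET requests.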

    Read the article

  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series.  Read Part 1 and Part 2 first. Right-Time Marketing Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples. Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of information is discerned through body language. But body language doesn’t translate very well over the web. Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up. This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top-line higher, and by virtue of the cloud-approach, keep costs at bay. 
My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast,” but you suddenly became “time-starved female age 20-30 with kids.” I’m not saying that was a bad thing – it was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics. What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products that examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front on a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service with the retailer. When I call my bank, a couple things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once. So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look-up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department. 
He scans one of the new ASICS shoes using the convenient QR Codes we provided on the shelf-tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customer to higher margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message, this time it’s a personalized offer for 10% off ASICS good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior that increases the long-term value of Thomas. He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products. In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and lead to an increase in the lifetime value of the customer. The key here is acting at the moment the customer shows interest using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time. Conclusion As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere…and commerce anytime as well.
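The in-store scenario above boils down to a simple pattern: take the context (who the customer is, where they are, what they just scanned), apply the currently selected marketing objective, and emit at most one message. The toy C# sketch below is purely illustrative - it is not Oracle Real-Time Decisions code and the names are invented - but it shows how a "move loyal customers to higher-margin products" rule might be expressed.

using System;

class RightTimeDemo
{
    enum Objective { MoveToHigherMargin, IncreaseVisitFrequency, MoveOverstock }

    // Returns a message to send, or null when no interaction is warranted.
    static string Decide(Objective objective, bool loyalCustomer,
                         bool scannedLowMarginItem, string higherMarginAlternative)
    {
        if (objective == Objective.MoveToHigherMargin && loyalCustomer && scannedLowMarginItem)
        {
            // Discount the higher-margin product, never the item the
            // customer would have bought anyway.
            return "10% off " + higherMarginAlternative + " for the next 24 hours";
        }

        return null; // stay quiet rather than push a random offer
    }

    static void Main()
    {
        // Thomas, a loyal runner, just scanned a low-end shoe in the store.
        var message = Decide(Objective.MoveToHigherMargin, true, true, "ASICS running shoes");
        Console.WriteLine(message ?? "no message");
    }
}

The real system obviously weighs far more context (time of day, location, purchase history) and is driven by configurable rules rather than code, but the shape of the decision - context in, at most one targeted interaction out - is the same.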

    Read the article

  • SATA errors reported during boot: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0

    - by digby280
    I have noticed some error during the Linux boot. They seem to continue to occur after the boot adding lines to the log every few seconds. Once booted this normally does not appear to be causing any problems. However, around 1 in 10 boots results in a kernel panic and the computer has on two or three occasions suddenly rebooted after being powered on for a number of hours. I presume the cause of the reboot is a kernel panic as well. I am running Ubuntu 11.10 and I have had Ubuntu installed on the computer for around a year. I have googled around and not found anything useful. I have provided the kernel log lines and the output of smartctl. Can anyone explain exactly what these errors mean, or better still how to resolve them? Apr 2 16:51:27 dell580 kernel: [ 19.831140] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:27 dell580 kernel: [ 19.934194] tg3 0000:03:00.0: eth0: Link is down Apr 2 16:51:28 dell580 kernel: [ 20.929468] tg3 0000:03:00.0: eth0: Link is up at 100 Mbps, full duplex Apr 2 16:51:28 dell580 kernel: [ 20.929471] tg3 0000:03:00.0: eth0: Flow control is on for TX and on for RX Apr 2 16:51:28 dell580 kernel: [ 20.929727] ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Apr 2 16:51:29 dell580 kernel: [ 21.609381] EXT4-fs (sdb2): re-mounted. Opts: errors=remount-ro,user_xattr,commit=0 Apr 2 16:51:29 dell580 kernel: [ 21.616515] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:29 dell580 kernel: [ 21.616519] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:29 dell580 kernel: [ 21.616525] ata2.00: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 21.934036] ata2.01: hard resetting link Apr 2 16:51:29 dell580 kernel: [ 22.408890] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.408907] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:29 dell580 kernel: [ 22.440934] ata2.00: configured for UDMA/100 Apr 2 16:51:29 dell580 kernel: [ 22.449040] ata2.01: configured for UDMA/133 Apr 2 16:51:29 dell580 kernel: [ 22.449818] ata2: EH complete Apr 2 16:51:33 dell580 kernel: [ 26.122664] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:33 dell580 kernel: [ 26.122670] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:33 dell580 kernel: [ 26.122677] ata2.00: hard resetting link Apr 2 16:51:33 dell580 kernel: [ 26.442684] ata2.01: hard resetting link Apr 2 16:51:34 dell580 kernel: [ 26.925545] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.925561] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:34 dell580 kernel: [ 26.961542] ata2.00: configured for UDMA/100 Apr 2 16:51:34 dell580 kernel: [ 26.969616] ata2.01: configured for UDMA/133 Apr 2 16:51:34 dell580 kernel: [ 26.970400] ata2: EH complete Apr 2 16:51:35 dell580 kernel: [ 28.111180] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:35 dell580 kernel: [ 28.111184] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:35 dell580 kernel: [ 28.111191] ata2.00: hard resetting link Apr 2 16:51:35 dell580 kernel: [ 28.429674] ata2.01: hard resetting link Apr 2 16:51:36 dell580 kernel: [ 28.904557] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.904572] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:36 dell580 kernel: [ 28.936609] ata2.00: configured for UDMA/100 Apr 2 16:51:36 dell580 kernel: [ 28.944692] ata2.01: configured for UDMA/133 
Apr 2 16:51:36 dell580 kernel: [ 28.945464] ata2: EH complete Apr 2 16:51:38 dell580 kernel: [ 31.581756] eth0: no IPv6 routers present Apr 2 16:51:38 dell580 kernel: [ 32.103066] ata2.01: exception Emask 0x40 SAct 0x0 SErr 0x80800 action 0x0 Apr 2 16:51:38 dell580 kernel: [ 32.103074] ata2.01: SError: { HostInt 10B8B } Apr 2 16:51:38 dell580 kernel: [ 32.103085] ata2.00: hard resetting link Apr 2 16:51:38 dell580 kernel: [ 32.419669] ata2.01: hard resetting link Apr 2 16:51:39 dell580 kernel: [ 32.894518] ata2.00: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.894533] ata2.01: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Apr 2 16:51:39 dell580 kernel: [ 32.926536] ata2.00: configured for UDMA/100 Apr 2 16:51:39 dell580 kernel: [ 32.934715] ata2.01: configured for UDMA/133 Apr 2 16:51:39 dell580 kernel: [ 32.935578] ata2: EH complete Here's the output of smartctl for the drive. smartctl 5.41 2011-06-09 r3365 [x86_64-linux-3.0.0-17-generic] (local build) Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net === START OF INFORMATION SECTION === Model Family: SAMSUNG SpinPoint F1 DT Device Model: SAMSUNG HD103UJ Serial Number: S13PJ90QC19706 LU WWN Device Id: 5 0000f0 00b1c7960 Firmware Version: 1AA01113 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Device is: In smartctl database [for details use: -P show] ATA Version is: 8 ATA Standard is: ATA-8-ACS revision 3b Local Time is: Mon Apr 2 17:13:48 2012 BST SMART support is: Available - device has SMART capability. SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART overall-health self-assessment test result: PASSED General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 41) The self-test routine was interrupted by the host with a hard or soft reset. Total time to complete Offline data collection: (11772) seconds. Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 197) minutes. Conveyance self-test routine recommended polling time: ( 21) minutes. SCT capabilities: (0x003f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000f 100 100 051 Pre-fail Always - 0 3 Spin_Up_Time 0x0007 076 076 011 Pre-fail Always - 7940 4 Start_Stop_Count 0x0032 099 099 000 Old_age Always - 521 5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0 7 Seek_Error_Rate 0x000f 253 253 051 Pre-fail Always - 0 8 Seek_Time_Performance 0x0025 100 100 015 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 642 10 Spin_Retry_Count 0x0033 100 100 051 Pre-fail Always - 0 11 Calibration_Retry_Count 0x0012 100 100 000 Old_age Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 482 13 Read_Soft_Error_Rate 0x000e 100 100 000 Old_age Always - 0 183 Runtime_Bad_Block 0x0032 100 100 000 Old_age Always - 759 184 End-to-End_Error 0x0033 100 100 000 Pre-fail Always - 0 187 Reported_Uncorrect 0x0032 100 100 000 Old_age Always - 0 188 Command_Timeout 0x0032 100 100 000 Old_age Always - 0 190 Airflow_Temperature_Cel 0x0022 073 069 000 Old_age Always - 27 (Min/Max 16/27) 194 Temperature_Celsius 0x0022 073 067 000 Old_age Always - 27 (Min/Max 16/28) 195 Hardware_ECC_Recovered 0x001a 100 100 000 Old_age Always - 320028 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x003e 099 099 000 Old_age Always - 1494 200 Multi_Zone_Error_Rate 0x000a 100 100 000 Old_age Always - 0 201 Soft_Read_Error_Rate 0x000a 253 253 000 Old_age Always - 0 SMART Error Log Version: 1 ATA Error Count: 211 (device log contains only the most recent five errors) CR = Command Register [HEX] FR = Features Register [HEX] SC = Sector Count Register [HEX] SN = Sector Number Register [HEX] CL = Cylinder Low Register [HEX] CH = Cylinder High Register [HEX] DH = Device/Head Register [HEX] DC = Device Command Register [HEX] ER = Error register [HEX] ST = Status register [HEX] Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days. Error 211 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 0f 31 63 8f e1 Error: ICRC, ABRT 15 sectors at LBA = 0x018f6331 = 26174257 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 40 62 8f e1 08 00:01:00.460 READ DMA c8 00 20 00 7c 30 e0 08 00:01:00.450 READ DMA c8 00 00 10 49 8f e1 08 00:01:00.440 READ DMA c8 00 e0 20 d0 30 e0 08 00:01:00.420 READ DMA c8 00 00 c0 59 90 e1 08 00:01:00.400 READ DMA Error 210 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. 
After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 cf e9 cf 66 e0 Error: ICRC, ABRT 207 sectors at LBA = 0x0066cfe9 = 6737897 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 b8 cf 66 e0 08 00:08:29.780 READ DMA c8 00 60 60 c9 18 e0 08 00:08:29.770 READ DMA c8 00 40 20 c9 18 e0 08 00:08:29.770 READ DMA c8 00 20 00 c9 18 e0 08 00:08:29.760 READ DMA c8 00 20 98 cf 66 e0 08 00:08:29.750 READ DMA Error 209 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 2f d1 74 e0 e0 Error: ICRC, ABRT 47 sectors at LBA = 0x00e074d1 = 14709969 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 00 00 74 e0 e0 08 00:00:30.940 READ DMA c8 00 20 18 36 de e0 08 00:00:30.930 READ DMA c8 00 08 48 f1 dd e0 08 00:00:30.930 READ DMA c8 00 08 a8 f0 dd e0 08 00:00:30.930 READ DMA c8 00 08 90 f0 dd e0 08 00:00:30.930 READ DMA Error 208 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 84 51 7f 21 88 9d e0 Error: ICRC, ABRT 127 sectors at LBA = 0x009d8821 = 10324001 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- c8 00 a0 00 88 9d e0 08 00:00:27.610 READ DMA c8 00 58 a8 e7 9c e0 08 00:00:27.610 READ DMA c8 00 00 28 e6 9c e0 08 00:00:27.610 READ DMA c8 00 00 e0 e4 9c e0 08 00:00:27.610 READ DMA c8 00 00 90 e0 9c e0 08 00:00:27.600 READ DMA Error 207 occurred at disk power-on lifetime: 0 hours (0 days + 0 hours) When the command that caused the error occurred, the device was active or idle. After command completion occurred, registers were: ER ST SC SN CL CH DH -- -- -- -- -- -- -- 04 51 26 6a 6a c3 e0 Error: ABRT at LBA = 0x00c36a6a = 12806762 Commands leading to the command that caused the error were: CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name -- -- -- -- -- -- -- -- ---------------- -------------------- ca 00 00 90 69 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 68 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 50 65 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 d0 64 c3 e0 08 00:29:39.350 WRITE DMA ca 00 40 90 63 c3 e0 08 00:29:39.350 WRITE DMA SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Interrupted (host reset) 90% 638 - # 2 Short offline Interrupted (host reset) 90% 638 - # 3 Extended offline Interrupted (host reset) 90% 638 - # 4 Short offline Interrupted (host reset) 90% 638 - # 5 Extended offline Interrupted (host reset) 90% 638 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. 
If Selective self-test is pending on power-up, resume after 0 minute delay.
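The kernel messages above (SError: { HostInt 10B8B }) and the ICRC/ABRT entries in the SMART error log are interface errors, and the UDMA_CRC_Error_Count raw value of 1494 points the same way; such CRC errors are commonly caused by the SATA cable or connector rather than by the disk surface. A quick way to see whether the counter is still climbing is to log it periodically and compare. The sketch below is only an illustration and is not from the original post: it assumes smartmontools is installed, that it runs as root, and that /dev/sdb is the affected drive (adjust as needed).

import subprocess
import sys
from datetime import datetime

DEVICE = "/dev/sdb"   # hypothetical: use the disk that sits behind ata2 on your system

def udma_crc_error_count(device):
    # smartctl -A prints the attribute table; rows look like:
    # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    out = subprocess.check_output(["smartctl", "-A", device], universal_newlines=True)
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[0] == "199":   # 199 = UDMA_CRC_Error_Count
            return int(fields[9])
    return None

if __name__ == "__main__":
    count = udma_crc_error_count(DEVICE)
    if count is None:
        sys.exit("attribute 199 not reported by " + DEVICE)
    print("{0}  {1}  UDMA_CRC_Error_Count={2}".format(datetime.now().isoformat(), DEVICE, count))

If the value keeps increasing across runs, reseating or replacing the SATA cable (or moving the drive to another port) is usually the first thing to try before suspecting the drive itself.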

    Read the article

  • CodePlex Daily Summary for Monday, August 27, 2012

    CodePlex Daily Summary for Monday, August 27, 2012Popular ReleasesHome Access Plus+: v8.0: v8.0827.1800 RELEASE CHANGED TO BETA Any issues, please log them on http://www.edugeek.net/forums/home-access-plus/ This is full release, NO upgrade ZIP will be provided as most files require replacing. To upgrade from a previous version, delete everything but your AppData folder, extract all but the AppData folder and run your HAP+ install Documentation is supplied in the Web Zip The Quota Services require executing a script to register the service, this can be found in there install di...Math.NET Numerics: Math.NET Numerics v2.2.0: Major linear algebra rework since v2.1, now available on Codeplex as well (previous versions were only available via NuGet). Also available as NuGet packages: PM> Install-Package MathNet.Numerics PM> Install-Package MathNet.Numerics.FSharp New: instead of the special Silverlight build we now provide a portable version supporting .Net 4, Silverlight 5 and .Net Core (WinRT) 4.5: PM> Install-Package MathNet.Numerics.PortableMetodología General Ajustada - MGA: 03.00.08: Cambios Aury: Cambios realizados en el reporte de la MGA: Errores en la generación cuando se seleccionan todas las opciones y Que sólo se imprima la información de la alternativa seleccionada para el proyecto. Cambios John: Integración de código con cambios enviados por Aury Niño. Generación de instaladores. Soporte técnico por correo electrónico y telefónico.Phalanger - The PHP Language Compiler for the .NET Framework: 3.0.0.3391 (September 2012): New features: Extended ReflectionClass libxml error handling, constants TreatWarningsAsErrors MSBuild option OnlyPrecompiledCode configuration option; allows to use only compiled code Fixes: ArgsAware exception fix accessing .NET properties bug fix ASP.NET session handler fix for OutOfProc mode Phalanger Tools for Visual Studio: Visual Studio 2010 & 2012 New debugger engine, PHP-like debugging Lot of fixes of project files, formatting, smart indent, colorization etc. Improved ...WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.06: Whats New Added CKEditor 3.6.4 oEmbed Plugin can now handle short urls changes The Template File can now parsed from an xml file instead of js (More Info...) Style Sets can now parsed from an xml file instead of js (More Info...) Fixed Showing wrong Pages in Child Portal in the Link Dialog Fixed Urls in dnnpages Plugin Fixed Issue #6969 WordCount Plugin Fixed Issue #6973 File-Browser: Fixed Deleting of Files File-Browser: Improved loading time File-Browser: Improved the loa...MabiCommerce: MabiCommerce 1.0.1: What's NewSetup now creates shortcuts Fix spelling errors Minor enhancement to the Map window.ScintillaNET: ScintillaNET 2.5.2: This release has been built from the 2.5 branch. Version 2.5.2 is functionally identical to the 2.5.1 release but also includes the XML documentation comments file generated by Visual Studio. It is not 100% comprehensive but it will give you Visual Studio IntelliSense for a large part of the API. Just make sure the ScintillaNET.xml file is in the same folder as the ScintillaNET.dll reference you're using in your projects. (The XML file does not need to be distributed with your application)....TouchInjector: TouchInjector 1.1: Version 1.1: fixed a bug with the autorun optionWinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For compiled version use NuGet. 
You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features AsyncUI extensions Controls and control extensions Converters Debugging helpers Imaging IO helpers VisualTree helpers Samples Recent changes NOTE: Namespace changes DebugConsol...BlackJumboDog: Ver5.7.1: 2012.08.25 Ver5.7.1 (1)?????·?????LING?????????????? (2)SMTP???(????)????、?????\?????????????????????Christoc's DotNetNuke Module Development Template: 00.00.09 for DNN6: Probably the final VS2010 Release as I have some new VS2012 templates coming. BEFORE USE YOU need to install the MSBuild Community Tasks available from https://github.com/loresoft/msbuildtasks/downloads For best results you should configure your development environment as described in this blog post Then read this latest blog post about customizing and using these custom templates. Installation is simple To use this template place the ZIP (not extracted) file in your My Documents\Visual ...Visual Studio Team Foundation Server Branching and Merging Guide: v2 - Visual Studio 2012: Welcome to the Branching and Merging Guide Quality-Bar Details Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review Documentation has been reviewed by the quality and recording team All critical bugs have been resolved Known Issues / Bugs Spelling, grammar and content revisions are in progress. Hotfix will be published.Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.62: Fix for issue #18525 - escaped characters in CSS identifiers get double-escaped if the character immediately after the backslash is not normally allowed in an identifier. fixed symbol problem with nuget package. 4.62 should have nuget symbols available again. Also want to highlight again the breaking change introduced in 4.61 regarding the renaming of the DLL from AjaxMin.dll to AjaxMinLibrary.dll to fix strong-name collisions between the DLL and the EXE. Please be aware of this change and...nopCommerce. Open source shopping cart (ASP.NET MVC): nopcommerce 2.65: As some of you may know we were planning to release version 2.70 much later (the end of September). But today we have to release this intermediate version (2.65). It fixes a critical issue caused by a third-party assembly when running nopCommerce on a server with .NET 4.5 installed. No major features have been introduced with this release as our development efforts were focused on further enhancements and fixing bugs. To see the full list of fixes and changes please visit the release notes p...MyRouter (Virtual WiFi Router): MyRouter 1.2.9: . Fix: Some missing changes for fixing the window subclassing crash. · Fix: fixed bug when Run MyRouter at the first Time. · Fix: Log File · Fix: improve performance speed application · fix: solve some Exception.TFS Project Test Migrator: TestPlanMigration v1.0.0: Release 1.0.0 This first version do not create the test cases in the target project because the goal was to restore a Test Plan + Test Suite hierarchy after a manual user deletion without restoring all the Project Collection Database. 
As I discovered, deleting a Test Plan will do the following : - Delete all TestSuiteEntry (the link between a Test Suite node and a Test Case) - Delete all TestSuite (the nodes in the test hierarchy), including root TestSuite - Delete the TestPlan Test c...ERPStore eCommerce FrontOffice: ERPStore.Core V4.0.0.2 MVC4 RTM: ERPStore.Core V4.0.0.2 MVC4 RTM (Code Source)ZXing.Net: ZXing.Net 0.8.0.0: sync with rev. 2393 of the java version improved API, direct support for multiple barcode decoding, wrapper for barcode generating many other improvements and fixes encoder and decoder command line clients demo client for emguCV dev documentation startedMFCMAPI: August 2012 Release: Build: 15.0.0.1035 Full release notes at SGriffin's blog. If you just want to run the MFCMAPI or MrMAPI, get the executables. If you want to debug them, get the symbol files and the source. The 64 bit builds will only work on a machine with Outlook 2010 64 bit installed. All other machines should use the 32 bit builds, regardless of the operating system. Facebook BadgePulse: Pulse Beta 5: Whats new in this release? Well to start with we now have Wallbase.cc Authentication! so you can access favorites or NSFW. This version requires .NET 4.0, you probably already have it, but if you don't it's a free and easy download from Microsoft. Pulse can bet set to start on Windows startup now too. The Wallpaper setter has settings now, so you can change the background color of the desktop and the Picture Position (Tile/Center/Fill/etc...) I've switched to Windows Forms instead of WPF...New ProjectsAStar Sample WPF Application by Ben Scharbach: The A* Pathfinding component contains an A* Manager, with three A* path finding engines, allowing for 3 simultaneous path searches on PC and XBOX. - By BenContactor: Programs and Schematics for sending a Short Message Service from a computer.JCI Page: gfdsgfdsgfdsL-Calc: Przykladowy kalkulator liczacy indukcyjnosc poprzez odczyt czestotliwosci obwodu rezonansowego.LibXmlSocket: XmlSocket LibraryNavegar: WPF Navigation pour MVVM Light et SimpleIocNUnit Comparisons: A set of libraries which extend the NUnit constraint framework to support deeper comparisons and report detailed differences useful to test debugging. ORMAC: ORMAC is a micro .NET ORM PersonalDataCenter: PDCNet1Project WarRoom: An experimental chatroom jQuery widget for instant messaging functionality on web applications.Quantum.Net: Quantum.Net is a free computation library developp in C#. Contains some mathematics functions and physical element.Specification Framework: A small framework to get started with a type safe variation of the specification pattern.Sphere Community: Sphere Community Pack 2.0TouchInjector: Generate Windows 8 Touch from TUIO messages.

    Read the article

  • Oracle Solaris 11 ZFS Lab for Openworld 2012

    - by user12626122
Preface This is the content from the Oracle Openworld 2012 ZFS lab. It was well attended - the feedback was that it was a little short - that's probably because in writing it I became very time-conscious after the ASM/ACFS on Solaris extravaganza I ran last year, which was almost too long for mortal man to finish in the 1-hour session. Enjoy. Table of Contents Exercise Z.1: ZFS Pools Exercise Z.2: ZFS File Systems Exercise Z.3: ZFS Compression Exercise Z.4: ZFS Deduplication Exercise Z.5: ZFS Encryption Exercise Z.6: Solaris 11 Shadow Migration Introduction This set of exercises is designed to briefly demonstrate new features in the Solaris 11 ZFS file system: Deduplication, Encryption and Shadow Migration. Also included is the creation of zpools and zfs file systems - the basic building blocks of the technology, and also Compression, which is the complement of Deduplication. The exercises are just introductions - you are referred to the ZFS Administration Manual for further information. From Solaris 11 onward the online manual pages consist of zpool(1M) and zfs(1M) with further feature-specific information in zfs_allow(1M), zfs_encrypt(1M) and zfs_share(1M). The lab is easily carried out in a VirtualBox VM running Solaris 11 with 6 virtual 3 GB disks to play with. Exercise Z.1: ZFS Pools Task: You have several disks to use for your new file system. Create a new zpool and a file system within it. Lab: You will check the status of existing zpools, create your own pool and expand it. Your Solaris 11 installation already has a root ZFS pool. It contains the root file system. Check this: root@solaris:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 15.9G 6.62G 9.25G 41% 1.00x ONLINE - root@solaris:~# zpool status pool: rpool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM rpool ONLINE 0 0 0 c3t0d0s0 ONLINE 0 0 0 errors: No known data errors Note the disk device the root pool is on - c3t0d0s0. Now you will create your own ZFS pool. First you will check what disks are available: root@solaris:~# echo | format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c3t0d0 <ATA-VBOX HARDDISK-1.0 cyl 2085 alt 2 hd 255 sec 63> /pci@0,0/pci8086,2829@d/disk@0,0 1. c3t2d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@2,0 2. c3t3d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@3,0 3. c3t4d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@4,0 4. c3t5d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@5,0 5. c3t6d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@6,0 6. c3t7d0 <ATA-VBOX HARDDISK-1.0 cyl 1534 alt 2 hd 128 sec 32> /pci@0,0/pci8086,2829@d/disk@7,0 Specify disk (enter its number): Specify disk (enter its number): The root disk is numbered 0. The others are free for use. 
Try creating a simple pool and observe the error message: root@solaris:~# zpool create mypool c3t2d0 c3t3d0 'mypool' successfully created, but with no redundancy; failure of one device will cause loss of the pool So destroy that pool and create a mirrored pool instead: root@solaris:~# zpool destroy mypool root@solaris:~# zpool create mypool mirror c3t2d0 c3t3d0 root@solaris:~# zpool status mypool pool: mypool state: ONLINE scan: none requested config: NAME STATE READ WRITE CKSUM mypool ONLINE 0 0 0 mirror-0 ONLINE 0 0 0 c3t2d0 ONLINE 0 0 0 c3t3d0 ONLINE 0 0 0 errors: No known data errors Back to topExercise Z.2: ZFS File Systems Task: You have to create file systems for later exercises. You can see that when a pool is created, a file system of the same name is created: root@solaris:~# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 86.5K 2.94G 31K /mypool Create your filesystems and mountpoints as follows: root@solaris:~# zfs create -o mountpoint=/data1 mypool/mydata1 The -o option sets the mount point and automatically creates the necessary directory. root@solaris:~# zfs list mypool/mydata1 NAME USED AVAIL REFER MOUNTPOINT mypool/mydata1 31K 2.94G 31K /data1 Back to top Exercise Z.3: ZFS Compression Task:Try out different forms of compression available in ZFS Lab:Create 2nd filesystem with compression, fill both file systems with the same data, observe results You can see from the zfs(1) manual page that there are several types of compression available to you, set with the property=value syntax: compression=on | off | lzjb | gzip | gzip-N | zle Controls the compression algorithm used for this dataset. The lzjb compression algorithm is optimized for performance while providing decent data compression. Setting compression to on uses the lzjb compression algorithm. The gzip compression algorithm uses the same compression as the gzip(1) command. You can specify the gzip level by using the value gzip-N where N is an integer from 1 (fastest) to 9 (best compression ratio). Currently, gzip is equivalent to gzip-6 (which is also the default for gzip(1)). Create a second filesystem with compression turned on. Note how you set and get your values separately: root@solaris:~# zfs create -o mountpoint=/data2 mypool/mydata2 root@solaris:~# zfs set compression=gzip-9 mypool/mydata2 root@solaris:~# zfs get compression mypool/mydata1 NAME PROPERTY VALUE SOURCE mypool/mydata1 compression off default root@solaris:~# zfs get compression mypool/mydata2 NAME PROPERTY VALUE SOURCE mypool/mydata2 compression gzip-9 local Now you can copy the contents of /usr/lib into both your normal and compressing filesystem and observe the results. Don't forget the dot or period (".") in the find(1) command below: root@solaris:~# cd /usr/lib root@solaris:/usr/lib# find . -print | cpio -pdv /data1 root@solaris:/usr/lib# find . -print | cpio -pdv /data2 The copy into the compressing file system takes longer - as it has to perform the compression but the results show the effect: root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.35G 1.59G 31K /mypool mypool/mydata1 1.01G 1.59G 1.01G /data1 mypool/mydata2 341M 1.59G 341M /data2 Note that the available space in the pool is shared amongst the file systems. This behavior can be modified using quotas and reservations which are not covered in this lab but are covered extensively in the ZFS Administrators Guide. Back to top Exercise Z.4: ZFS Deduplication The deduplication property is used to remove redundant data from a ZFS file system. 
With the property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored and common components are shared. Task: See how to implement deduplication and observe its effects Lab: You will create a ZFS file system with deduplication turned on and see if it reduces the amount of physical storage needed when we again fill it with a copy of /usr/lib. root@solaris:/usr/lib# zfs destroy mypool/mydata2 root@solaris:/usr/lib# zfs set dedup=on mypool/mydata1 root@solaris:/usr/lib# rm -rf /data1/* root@solaris:/usr/lib# mkdir /data1/2nd-copy root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02M 2.94G 31K /mypool mypool/mydata1 43K 2.94G 43K /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.02G 1.99G 31K /mypool mypool/mydata1 1.01G 1.99G 1.01G /data1 root@solaris:/usr/lib# find . -print | cpio -pd /data1/2nd-copy 2142768 blocks root@solaris:/usr/lib# zfs list NAME USED AVAIL REFER MOUNTPOINT mypool 1.99G 1.96G 31K /mypool mypool/mydata1 1.98G 1.96G 1.98G /data1 You could go on creating copies for quite a while...but you get the idea. Note that deduplication and compression can be combined: the compression acts on metadata. Deduplication works across file systems in a pool and there is a zpool-wide property dedupratio: root@solaris:/usr/lib# zpool get dedupratio mypool NAME PROPERTY VALUE SOURCE mypool dedupratio 4.30x - Deduplication can also be checked using "zpool list": root@solaris:/usr/lib# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT mypool 2.98G 1001M 2.01G 32% 4.30x ONLINE - rpool 15.9G 6.66G 9.21G 41% 1.00x ONLINE - Before moving on to the next topic, destroy that dataset and free up some space: root@solaris:~# zfs destroy mypool/mydata1 Back to top Exercise Z.5: ZFS Encryption Task: Encrypt sensitive data. Lab: Explore basic ZFS encryption. This lab only covers the basics of ZFS Encryption. In particular it does not cover various aspects of key management. Please see the ZFS Administration Manual and the zfs_encrypt(1M) manual page for more detail on this functionality. Back to top root@solaris:~# zfs create -o encryption=on mypool/data2 Enter passphrase for 'mypool/data2': ******** Enter again: ******** root@solaris:~# Creation of a descendent dataset shows that encryption is inherited from the parent: root@solaris:~# zfs create mypool/data2/data3 root@solaris:~# zfs get -r encryption,keysource,keystatus,checksum mypool/data2 NAME PROPERTY VALUE SOURCE mypool/data2 encryption on local mypool/data2 keysource passphrase,prompt local mypool/data2 keystatus available - mypool/data2 checksum sha256-mac local mypool/data2/data3 encryption on inherited from mypool/data2 mypool/data2/data3 keysource passphrase,prompt inherited from mypool/data2 mypool/data2/data3 keystatus available - mypool/data2/data3 checksum sha256-mac inherited from mypool/data2 You will find that the online manual page zfs_encrypt(1M) contains examples. In particular, if time permits during this lab session you may wish to explore changing a key using "zfs key -c mypool/data2". Exercise Z.6: Shadow Migration Shadow Migration allows you to migrate data from an old file system to a new file system while simultaneously allowing access and modification to the new file system during the process. You can use Shadow Migration to migrate a local or remote UFS or ZFS file system to a local file system. 
Task: You wish to migrate data from one file system (UFS, ZFS, VxFS) to ZFS while maintaining access to it. Lab: Create the infrastructure for shadow migration and transfer one file system into another. First create the file system you want to migrate: root@solaris:~# zpool create oldstuff c3t4d0 root@solaris:~# zfs create oldstuff/forgotten Then populate it with some files: root@solaris:~# cd /var/adm root@solaris:/var/adm# find . -print | cpio -pdv /oldstuff/forgotten You need the shadow-migration package installed: root@solaris:~# pkg install shadow-migration Packages to install: 1 Create boot environment: No Create backup boot environment: No Services to change: 1 DOWNLOAD PKGS FILES XFER (MB) Completed 1/1 14/14 0.2/0.2 PHASE ACTIONS Install Phase 39/39 PHASE ITEMS Package State Update Phase 1/1 Image State Update Phase 2/2 You then enable the shadowd service: root@solaris:~# svcadm enable shadowd root@solaris:~# svcs shadowd STATE STIME FMRI online 7:16:09 svc:/system/filesystem/shadowd:default Set the file system to be migrated to read-only: root@solaris:~# zfs set readonly=on oldstuff/forgotten Create a new zfs file system with the shadow property set to the file system to be migrated: root@solaris:~# zfs create -o shadow=file:///oldstuff/forgotten mypool/remembered Use the shadowstat(1M) command to see the progress of the migration: root@solaris:~# shadowstat EST BYTES BYTES ELAPSED DATASET XFRD LEFT ERRORS TIME mypool/remembered 92.5M - - 00:00:59 mypool/remembered 99.1M 302M - 00:01:09 mypool/remembered 109M 260M - 00:01:19 mypool/remembered 133M 304M - 00:01:29 mypool/remembered 149M 339M - 00:01:39 mypool/remembered 156M 86.4M - 00:01:49 mypool/remembered 156M 8E 29 (completed) Note that if you had created /mypool/remembered as encrypted, this would be the preferred method of encrypting existing data. Similarly for compressing or deduplicating existing data. The procedure for migrating a file system over NFS is similar - see the ZFS Administration manual. That concludes this lab session.
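As a small add-on (not part of the original lab), here is a sketch that shells out to the same zfs and zpool commands used in the exercises and prints the compression and deduplication ratios in one go. The dataset and pool names assume the ones created above (mypool, mypool/mydata1, mypool/mydata2), and it expects to run as root on the lab VM with Python available.

import subprocess

def run(cmd):
    # run a command and return its stdout as text
    return subprocess.check_output(cmd, universal_newlines=True)

def zfs_value(dataset, prop):
    # `zfs get -H -o value <prop> <dataset>` prints just the value, no headers
    return run(["zfs", "get", "-H", "-o", "value", prop, dataset]).strip()

def zpool_dedupratio(pool):
    # `zpool get dedupratio <pool>` prints a header line and one data line:
    # NAME  PROPERTY  VALUE  SOURCE
    last_line = run(["zpool", "get", "dedupratio", pool]).splitlines()[-1]
    return last_line.split()[2]

if __name__ == "__main__":
    for ds in ("mypool/mydata1", "mypool/mydata2"):     # datasets from Exercises Z.3/Z.4
        try:
            print("{0}: compression={1} compressratio={2}".format(
                ds, zfs_value(ds, "compression"), zfs_value(ds, "compressratio")))
        except subprocess.CalledProcessError:
            print("{0}: not found (it may have been destroyed in a later exercise)".format(ds))
    print("mypool: dedupratio={0}".format(zpool_dedupratio("mypool")))

The same pattern works for any other property shown in the lab, such as dedup, encryption or shadow.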

    Read the article

  • Ubuntu 12.04 + Raid0 + Windows 7 not loading

    - by Douglas
    please someone help me.... (Sorry for my english) Hi, I have a Pc with 2 Hd (1Tb each) on Raid0. I had a Windows 7 64bits working for several months. When I installed the Windows I let a 100Gb partition empty to install Ubuntu someday. I was using Linux on a Virtualbox, but this week I tried to install Ubuntu 12.04 in this 100Gb partition. I used the Ubuntu alternate cd, because the 'normal' cd was giving me trouble with the Raid0. The grub installation always reported a error. After a lot of work I found that I nedded to install grub on partition /dev/mapper/isw_chjbfeec_DougRaid1 (see Bootinfo below). The Windows installation created a 100Mb boot partition, so I needed to install grub in this partition. Now I have the Ubuntu working 100% ok. The problem is, the Windows is not booting! The windows option is present on the grub menu, but when I choose the windows option there is a black screen and after that the grub is reloaded. My Bootinfo is: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. => Grub2 (v1.99) is installed in the MBR of /dev/mapper/isw_chjbfeec_DougRaid and looks at sector 1 of the same hard drive for core.img. core.img is at this location and looks in partition 1 for /boot/grub. sda1: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' sda2: __________________________________________________________________________ File system: Boot sector type: Unknown Boot sector info: Mounting failed: mount: unknown filesystem type '' mount: unknown filesystem type '' sda3: __________________________________________________________________________ File system: Extended Partition Boot sector type: Unknown Boot sector info: isw_chjbfeec_DougRaid1: ________________________________________________________ File system: ntfs Boot sector type: Grub2 (v1.99) Boot sector info: Grub2 (v1.99) is installed in the boot sector of isw_chjbfeec_DougRaid1 and looks at sector 3841862992 of the same hard drive for core.img. core.img is at this location and looks for (,msdos5)/boot/grub on this drive. No errors found in the Boot Parameter Block. Operating System: Boot files: /grldr /bootmgr /Boot/BCD /grldr isw_chjbfeec_DougRaid2: ________________________________________________________ File system: ntfs Boot sector type: Windows Vista/7: NTFS Boot sector info: No errors found in the Boot Parameter Block. 
Operating System: Windows 7 Boot files: /Windows/System32/winload.exe isw_chjbfeec_DougRaid3: ________________________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: isw_chjbfeec_DougRaid5: ________________________________________________________ File system: ext4 Boot sector type: - Boot sector info: Operating System: Ubuntu 12.04 LTS Boot files: /boot/grub/grub.cfg /etc/fstab /boot/grub/core.img isw_chjbfeec_DougRaid6: ________________________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _____________________________________________________________________ Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/sda2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/sda3 3,686,402,558 3,907,039,743 220,637,186 5 Extended Invalid MBR Signature found. EBR refers to a location outside the hard drive. /dev/sda2 ends after the last sector of /dev/sda /dev/sda3 ends after the last sector of /dev/sda Drive: isw_chjbfeec_DougRaid _____________________________________________________________________ Disk /dev/mapper/isw_chjbfeec_DougRaid: 2000.4 GB, 2000404348928 bytes 255 heads, 63 sectors/track, 243201 cylinders, total 3907039744 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/mapper/isw_chjbfeec_DougRaid1 * 2,048 206,847 204,800 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid2 206,848 3,686,402,047 3,686,195,200 7 NTFS / exFAT / HPFS /dev/mapper/isw_chjbfeec_DougRaid3 3,686,402,558 3,907,039,743 220,637,186 5 Extended /dev/mapper/isw_chjbfeec_DougRaid5 3,686,402,560 3,881,876,479 195,473,920 83 Linux /dev/mapper/isw_chjbfeec_DougRaid6 3,881,876,992 3,907,039,743 25,162,752 82 Linux swap / Solaris "blkid" output: ________________________________________________________________ Device UUID TYPE LABEL /dev/mapper/isw_chjbfeec_DougRaid1 C89C73D19C73B910 ntfs Reservado pelo Sistema /dev/mapper/isw_chjbfeec_DougRaid2 6830883A3088116C ntfs /dev/mapper/isw_chjbfeec_DougRaid5 bbab868a-ea53-4be3-ba7d-2737fe6cb24c ext4 /dev/mapper/isw_chjbfeec_DougRaid6 7a830a3c-88fb-4cba-80dc-f32e08abfd5b swap /dev/sda isw_raid_member /dev/sdb isw_raid_member /dev/sr0 iso9660 Windows7x86x64SK ========================= "ls -R /dev/mapper/" output: ========================= /dev/mapper: control isw_chjbfeec_DougRaid isw_chjbfeec_DougRaid1 isw_chjbfeec_DougRaid2 isw_chjbfeec_DougRaid3 isw_chjbfeec_DougRaid5 isw_chjbfeec_DougRaid6 ================================ Mount points: ================================= Device Mount_Point Type Options /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 (rw,errors=remount-ro) /dev/sr0 /media/Windows7x86x64SK iso9660 (ro,nosuid,nodev,uid=1000,gid=1000,iocharset=utf8,mode=0400,dmode=0500,uhelper=udisks) ================= isw_chjbfeec_DougRaid1/grldr embedded menu: ================== -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- ================== isw_chjbfeec_DougRaid5/boot/grub/grub.cfg: ================== -------------------------------------------------------------------------------- # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from 
/etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### if [ -s $prefix/grubenv ]; then set have_grubenv=true load_env fi set default="0" if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function recordfail { set recordfail=1 if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi } function load_video { insmod vbe insmod vga insmod video_bochs insmod video_cirrus } insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c if loadfont /usr/share/grub/unicode.pf2 ; then set gfxmode=auto load_video insmod gfxterm insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c set locale_dir=($root)/boot/grub/locale set lang=en_US insmod gettext fi terminal_output gfxterm if [ "${recordfail}" = 1 ]; then set timeout=-1 else set timeout=10 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/05_debian_theme ### set menu_color_normal=white/black set menu_color_highlight=black/light-gray if background_color 44,0,30; then clear fi ### END /etc/grub.d/05_debian_theme ### ### BEGIN /etc/grub.d/10_linux ### function gfxmode { set gfxpayload="$1" if [ "$1" = "keep" ]; then set vt_handoff=vt.handoff=7 else set vt_handoff= fi } if [ ${recordfail} != 1 ]; then if [ -e ${prefix}/gfxblacklist.txt ]; then if hwmatch ${prefix}/gfxblacklist.txt 3; then if [ ${match} = 0 ]; then set linux_gfx_mode=keep else set linux_gfx_mode=text fi else set linux_gfx_mode=text fi else set linux_gfx_mode=keep fi else set linux_gfx_mode=text fi export linux_gfx_mode if [ "$linux_gfx_mode" != "text" ]; then load_video; fi menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-24-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-24-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-24-generic-pae ...' linux /boot/vmlinuz-3.2.0-24-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' 
initrd /boot/initrd.img-3.2.0-24-generic-pae } submenu "Previous Linux versions" { menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae' --class ubuntu --class gnu-linux --class gnu --class os { recordfail gfxmode $linux_gfx_mode insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro quiet splash $vt_handoff initrd /boot/initrd.img-3.2.0-23-generic-pae } menuentry 'Ubuntu, with Linux 3.2.0-23-generic-pae (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os { recordfail insmod gzio insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c echo 'Loading Linux 3.2.0-23-generic-pae ...' linux /boot/vmlinuz-3.2.0-23-generic-pae root=UUID=bbab868a-ea53-4be3-ba7d-2737fe6cb24c ro recovery nomodeset echo 'Loading initial ramdisk ...' initrd /boot/initrd.img-3.2.0-23-generic-pae } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/20_memtest86+ ### menuentry "Memory test (memtest86+)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin } menuentry "Memory test (memtest86+, serial console 115200)" { insmod part_msdos insmod ext2 set root='(/dev/mapper/isw_chjbfeec_DougRaid3,msdos1)' search --no-floppy --fs-uuid --set=root bbab868a-ea53-4be3-ba7d-2737fe6cb24c linux16 /boot/memtest86+.bin console=ttyS0,115200n8 } ### END /etc/grub.d/20_memtest86+ ### ### BEGIN /etc/grub.d/30_os-prober_proxy ### menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } ### END /etc/grub.d/30_os-prober_proxy ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### -------------------------------------------------------------------------------- ====================== isw_chjbfeec_DougRaid5/etc/fstab: ======================= -------------------------------------------------------------------------------- # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 /dev/mapper/isw_chjbfeec_DougRaid5 / ext4 errors=remount-ro 0 1 /dev/mapper/isw_chjbfeec_DougRaid6 none swap sw 0 0 -------------------------------------------------------------------------------- ========== isw_chjbfeec_DougRaid5: Location of files loaded by Grub: =========== GiB - GB File Fragment(s) = boot/grub/core.img 1 = boot/grub/grub.cfg 1 = boot/initrd.img-3.2.0-23-generic-pae 2 = boot/initrd.img-3.2.0-24-generic-pae 2 = boot/vmlinuz-3.2.0-23-generic-pae 1 = boot/vmlinuz-3.2.0-24-generic-pae 1 = initrd.img 2 = initrd.img.old 2 = vmlinuz 1 = vmlinuz.old 1 ======================== Unknown MBRs/Boot Sectors/etc: ======================== Unknown BootLoader on sda1 Unknown BootLoader on sda2 Unknown BootLoader on sda3 =============================== StdErr Messages: =============================== xz: (stdin): Compressed data is corrupt xz: (stdin): Compressed data is corrupt hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda1: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda2: No such file or directory hexdump: /dev/sda3: No such file or directory hexdump: /dev/sda3: No such file or directory xz: (stdin): Compressed data is corrupt awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in awk: cmd. line:36: Math support is not compiled in How we can see the Windows part at grub is: menuentry "Windows 7 (loader) (on /dev/mapper/isw_chjbfeec_DougRaid1)" --class windows --class os { insmod part_msdos insmod ntfs set root='(sda,msdos1)' search --no-floppy --fs-uuid --set=root C89C73D19C73B910 chainloader +1 } I tried a lot of combinations at the line: set root='(sda,msdos1)' , but no success I tried to change uuid to the /dev/mapper/isw_chjbfeec_DougRaid2 uuid, but the grub reports a error. I dont know what to do now. I really need to boot my windows partition. Someone knows what to do? Thanks........

    Read the article

  • Solaris: What comes next?

    - by alanc
As you probably know by now, a few months ago we released Solaris 11 after years of development. That of course means we now need to figure out what comes next - if Solaris 11 is “The First Cloud OS”, then what do we need to make future releases of Solaris be, to be modern and competitive when they're released? So we've been having planning and brainstorming meetings, and I've captured some notes here from just one of those we held a couple of weeks ago with a number of the Silicon Valley-based engineers. Now before someone sees an idea here and calls their product rep wanting to know what's up, please be warned that what follows are rough ideas, and as I'll discuss later, none of them have any commitment, schedule, working code, or even a plan for integration in any possible future product at this time. (Please don't make me force you to read the full Oracle future product disclaimer here, you should know it by heart already from the front of every Oracle product slide deck.) To start with, we did some background research, looking at ideas from other Oracle groups and competitive OSes. We examined what was hot in the technology arena and where the interesting startups were heading. We then looked at Solaris to see where we could apply those ideas.

Making Network Admins into Socially Networking Admins

We all know an admin who has grumbled about being the only one stuck late at work to fix a problem on the server, or having to work the weekend alone to do scheduled maintenance. But admins are humans (at least most are), and crave companionship and community with their fellow humans. And even when they're alone in the server room, they're never far from a network connection, allowing access to the wide world of wonders on the Internet. Our solution here is not building a new social network - there are enough of those already, and Oracle even has its own Oracle Mix social network. What we proposed is integrating Solaris features to help engage our system admins with these social networks, building community and bringing them recognition in the workplace, using achievement recognition systems as found in many popular gaming platforms. For instance, if you had a Facebook account, and a group of admin friends there, you could register it with our Social Network Utility For Facebook, and then your friends might see: Alan earned the achievement Critically Patched (April 2012) for patching all his servers. Matt is only at 50% - encourage him to complete this achievement today! To avoid any undue risk of advertising who has unpatched servers that are easier targets for hackers to break into, this information would be tightly protected via Facebook's world-renowned privacy settings to avoid it falling into the wrong hands. A related form of gamification we considered was replacing simple certifications with role-playing-game-style Experience Levels. Instead of just knowing an admin passed a test establishing a given level of competency, these would give recruiters a more detailed picture of how much real-world experience an admin has. Achievements such as the one above would feed into it, but larger numbers of experience points would be gained by tougher or more critical tasks - such as recovering a down system, or migrating a service to a new platform. (As long as it was an Oracle platform of course - migrating to an HP or IBM platform would cause the admin to lose points with us.) Unfortunately, we couldn't figure out a good way to prevent (if you will) “gaming” the system. 
For instance, a disgruntled admin might decide to start ignoring warnings from FMA that a part is beginning to fail, or skip preventative maintenance, in the hopes that they'd cause a catastrophic failure to earn more points for bolstering their resume as they look for a job elsewhere, without worrying about the effect on your business of a mission-critical server going down.

More Z's for ZFS

Our suggested new feature for ZFS was inspired by the world's most successful Z-startup of all time: Zynga. Using the Social Network Utility For Facebook described above, we'd tie it in with ZFS monitoring to help you out when you find yourself in a jam needing more disk space than you have, and can't wait a month to get a purchase order through channels to buy more. Instead, with the click of a button you could post to your group: Alan can't find any space in his server farm! Can you help? Friends could loan you some space on their connected servers for a few weeks, knowing that you'd return the favor when needed. ZFS would create a new filesystem for your use on their system, and securely share it with your system using Kerberized NFS. If none of your friends have space, then you could buy temporary space in small increments at affordable rates right there in Facebook, using your Facebook credits, and then file an expense report later, after the urgent need has passed.

Universal Single Sign On

One thing all the engineers agreed on was that we still had far too many "Single" sign-ons to deal with in our daily work. On the web, every web site used to have its own password database, forcing us to hope we could remember what login name was still available on each site when we signed up, and which unique password we came up with to avoid having to disclose our other passwords to a new site. In recent years, the web services world has finally been reducing the number of logins we have to manage, with many services allowing you to log in using your identity from Google, Twitter or Facebook. So we proposed following their lead, introducing PAM modules for web services - no more would you have to type in whatever login name IT assigned and try to remember the password you chose the last time password aging forced you to change it - you'd simply choose which web service you wanted to authenticate against, and would log in to your Solaris account upon receipt of a cookie from their identity service.

Pinning notes to the cloud

We also all noted that we each have our own pile of notes we keep in our daily work - in text files in our home directory, in notebooks we carry around, on white boards in offices and common areas, on sticky notes on our monitors, or on scraps of paper pinned to our bulletin boards. The contents of the notes vary: some are things just for us, some are useful for our groups, some we would share with the world. For instance, when our group moved to a new building a couple of years ago, we had a white board in the hallway listing all the NIS & DNS servers, subnets, and other network configuration information we needed to set up our Solaris machines after the move. Similarly, as Solaris 11 was finishing and we were all learning the new network configuration commands, we shared notes in wikis and e-mails with our fellow engineers. Users may also remember that one of the popular features of Sun's old BigAdmin site was a section for sharing scripts and tips such as these. Meanwhile, the online "pin board" at Pinterest is taking the web by storm. So we thought, why not mash those up to solve this problem? 
We proposed a new BigAddPin site where users could “pin” notes, command snippets, configuration information, and so on. For instance, once they had worked out the ideal Automated Installation manifest for their app server, they could pin it up to share with the rest of their group, or choose to make it public as an example for the world. Localized data, such as our group's notes on the servers for our subnet, could be shared only with users connecting from that subnet. And notes that they didn't want others to see at all could be marked private, such as the list of phone numbers to call for late-night pizza delivery to the machine room, the birthdays and anniversaries they can never remember but would be sleeping on the couch if they forgot, or the list of automatically generated, completely random, impossible-to-remember root passwords to all their servers. For greater integration with Solaris, we'd put support right into the command shells: redirect output to a pinned note, set your path to include pinned notes as scripts you can run, or bring up your recent shell history and pin a set of commands to save for the next time you need to remember how to do that operation.

Location service for Solaris servers

A longer-term plan would involve convincing the hardware design groups to put GPS locators with wireless transmitters in future server designs. This would help both admins and service personnel trying to find servers in today's massive data centers, and could feed into location presence apps to help show potential customers that while they may not see many Solaris machines on the desktop any more, they are all around. For instance, while walking down Wall Street it might show “There are over 2000 Solaris computers in this block.” [Note: this proposal was made before the recent media coverage of a location service aggregator app with less noble intentions, and in hindsight, we failed to consider what happens when such data similarly falls into the wrong hands. We certainly wouldn't want our app to be misinterpreted as “There are over $20 million worth of SPARC servers in this building, waiting for you to steal them,” so it's probably best it was rejected.]

Harnessing the power of the GPU for Security

Most modern OSes make use of the widespread availability of high-powered GPU hardware in today's computers, with desktop environments requiring 3-D graphics acceleration, whether in Ubuntu Unity, GNOME Shell on Fedora, or Aero Glass on Windows, but we haven't yet made Solaris fully take advantage of this, beyond our basic offering of Compiz on the desktop. Meanwhile, more businesses are interested in increasing security by using biometric authentication, but must also comply with laws in many countries preventing discrimination against employees with physical limitations such as missing eyes or fingers, not to mention the lost productivity when employees can't log in due to tinted contacts throwing off a retina scan or a paper cut changing their fingerprint appearance until it heals. Fortunately, the two groups considering these problems put their heads together and found a common solution, using 3D technology to enable authentication using the one body part all users are guaranteed to have: pam_phrenology.so, a new PAM module that uses an array of USB-attached web cams (or just one, if the user is willing to spin their chair during login) to take pictures of the user's head from all angles, create a 3D model, and compare it to the one in the authentication database. 
While Mythbusters has shown how easy it can be to fool common fingerprint scanners, we have not yet seen any evidence that people can impersonate the shape of another user's cranium, no matter how long they spend beating their head against the wall to reshape it. This could possibly be extended to group users, using modern versions of some of the older phrenological studies, such as giving all users with long grey beards access to the System Architect role, or automatically placing users with pointy spikes in their hair into an easy-use mode. Unfortunately, there are still some unsolved technical challenges we haven't figured out how to overcome: currently a visit to the hair salon causes your existing authentication to expire, and some users have found that shaving their heads is the only way to keep bad hair days from becoming bad login days.

Reaction to these ideas

After gathering all our notes on these ideas from the engineering brainstorming meeting, we took them in to present to our management. Unfortunately, most of their reaction cannot be printed here, and they chose not to accept any of these ideas as they were, but they did have some feedback for us to consider as they sent us back to the drawing board. They strongly suggested our ideas would be better presented if we weren't trying to decipher ink blotches that had been smeared by the condensation from the pint glasses we set on the napkins we were taking notes on, and to that end they let us know they would not be approving any more engineering offsites in Irish-themed pubs on the Friday of a Saint Patrick's Day weekend. (Hopefully they mean that situation specifically and aren't going to deny the funding for travel to this year's X.Org Developer's Conference just because it happens to be in Bavaria and ends on the Friday of the weekend Oktoberfest starts.) They recommended that our research techniques could be improved beyond just sitting around reading blogs and checking our Facebook, Twitter, and Pinterest accounts, for example by considering input from alternate viewpoints on topics such as gamification. They also mentioned that Oracle hadn't fully adopted some of Sun's common practices and that we might have to try harder to get those accepted now that we are one unified company. So, as I said at the beginning, don't pester your sales rep just yet for any of these, since they didn't get approved, but if you have better ideas, pass them on and maybe they'll get into our next batch of planning.


  • Juju Openstack bundle: Can't launch an instance

    - by user281985
    Deployed bundle:~makyo/openstack/2/openstack, on top of 7 physical boxes and 3 virtual ones. After changing vip_iface strings to point to right devices, e.g., br0 instead of eth0, and defining "/mnt/loopback|30G", in Cinder's block-device string, am able to navigate through openstack dashboard, error free. Following http://docs.openstack.org/grizzly/openstack-compute/install/apt/content/running-an-instance.html instructions, attempted to launch cirros 0.3.1 image; however, novalist shows the instance in error state. ubuntu@node7:~$ nova --debug boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd --key_name key2 --security_group default cirros REQ: curl -i http://keyStone.IP:5000/v2.0/tokens -X POST -H "Content-Type: application/json" -H "Accept: application/json" -H "User-Agent: python-novaclient" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "openstack"}}}' INFO (connectionpool:191) Starting new HTTP connection (1): keyStone.IP DEBUG (connectionpool:283) "POST /v2.0/tokens HTTP/1.1" 200 None RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:02 GMT', 'transfer-encoding': 'chunked', 'vary': 'X-Auth-Token', 'content-type': 'application/json'} RESP BODY: {"access": {"token": {"expires": "2014-06-11T00:01:02Z", "id": "3eefa1837d984426a633fe09259a1534", "tenant": {"description": "Created by Juju", "enabled": true, "id": "08cff06d13b74492b780d9ceed699239", "name": "admin"}}, "serviceCatalog": [{"endpoints": [{"adminURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:9696", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:9696", "publicURL": "http://nova.cloud.controller:9696"}], "endpoints_links": [], "type": "network", "name": "quantum"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:3333", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:3333", "publicURL": "http://nova.cloud.controller:3333"}], "endpoints_links": [], "type": "s3", "name": "s3"}, {"endpoints": [{"adminURL": "http://i.p.s.36:9292", "region": "RegionOne", "internalURL": "http://i.p.s.36:9292", "publicURL": "http://i.p.s.36:9292"}], "endpoints_links": [], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "region": "RegionOne", "internalURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239", "publicURL": "http://i.p.s.39:8776/v1/08cff06d13b74492b780d9ceed699239"}], "endpoints_links": [], "type": "volume", "name": "cinder"}, {"endpoints": [{"adminURL": "http://nova.cloud.controller:8773/services/Cloud", "region": "RegionOne", "internalURL": "http://nova.cloud.controller:8773/services/Cloud", "publicURL": "http://nova.cloud.controller:8773/services/Cloud"}], "endpoints_links": [], "type": "ec2", "name": "ec2"}, {"endpoints": [{"adminURL": "http://keyStone.IP:35357/v2.0", "region": "RegionOne", "internalURL": "http://keyStone.IP:5000/v2.0", "publicURL": "http://i.p.s.44:5000/v2.0"}], "endpoints_links": [], "type": "identity", "name": "keystone"}], "user": {"username": "admin", "roles_links": [], "id": "b3730a52a32e40f0a9500440d1ef1c7d", "roles": [{"id": "e020001eb9a049f4a16540238ab158aa", "name": 
"Admin"}, {"id": "b84fbff4d5554d53bbbffdaad66b56cb", "name": "KeystoneServiceAdmin"}, {"id": "129c8b49d42b4f0796109aaef2069aa9", "name": "KeystoneAdmin"}], "name": "admin"}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:03 GMT', 'x-compute-request-id': 'req-7f3459f8-d3d5-47f1-97a3-8407a4419a69', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:04 GMT', 'x-compute-request-id': 'req-2c153110-6969-4f3a-b51c-8f1a6ce75bee', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers -X POST -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" -d '{"server": {"name": "cirros", "imageRef": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "key_name": "key2", "flavorRef": "1", "max_count": 1, "min_count": 1, "security_groups": [{"name": "default"}]}}' INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "POST /v1.1/08cff06d13b74492b780d9ceed699239/servers HTTP/1.1" 202 436 RESP: [202] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-41e53086-6454-4efb-bb35-a30dc2c780be', 'content-type': 'application/json', 'location': 
'http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43', 'content-length': '436'} RESP BODY: {"server": {"security_groups": [{"name": "default"}], "OS-DCF:diskConfig": "MANUAL", "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "adminPass": "oFRbvRqif2C8"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 HTTP/1.1" 200 1349 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-d91d0858-7030-469d-8e55-40e05e4d00fd', 'content-type': 'application/json', 'content-length': '1349'} RESP BODY: {"server": {"status": "BUILD", "updated": "2014-06-10T00:01:05Z", "hostId": "", "OS-EXT-SRV-ATTR:host": null, "addresses": {}, "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/servers/2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "rel": "bookmark"}], "key_name": "key2", "image": {"id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": "scheduling", "OS-EXT-STS:vm_state": "building", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": null, "flavor": {"id": "1", "links": [{"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}]}, "id": "2eb5e3ad-3044-41c1-bbb7-10f398f83e43", "security_groups": [{"name": "default"}], "OS-EXT-AZ:availability_zone": "nova", "user_id": "b3730a52a32e40f0a9500440d1ef1c7d", "name": "cirros", "created": "2014-06-10T00:01:04Z", "tenant_id": "08cff06d13b74492b780d9ceed699239", "OS-DCF:diskConfig": "MANUAL", "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 0, "config_drive": "", "metadata": {}}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/flavors/1 HTTP/1.1" 200 418 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-896c0120-1102-4408-9e09-cd628f2dd699', 'content-type': 'application/json', 'content-length': '418'} RESP BODY: {"flavor": {"name": "m1.tiny", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "self"}, {"href": 
"http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/flavors/1", "rel": "bookmark"}], "ram": 512, "OS-FLV-DISABLED:disabled": false, "vcpus": 1, "swap": "", "os-flavor-access:is_public": true, "rxtx_factor": 1.0, "OS-FLV-EXT-DATA:ephemeral": 0, "disk": 0, "id": "1"}} REQ: curl -i http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd -X GET -H "X-Auth-Project-Id: admin" -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: 3eefa1837d984426a633fe09259a1534" INFO (connectionpool:191) Starting new HTTP connection (1): nova.cloud.controller DEBUG (connectionpool:283) "GET /v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd HTTP/1.1" 200 719 RESP: [200] {'date': 'Tue, 10 Jun 2014 00:01:05 GMT', 'x-compute-request-id': 'req-454e9651-c247-4d31-8049-6b254de050ae', 'content-type': 'application/json', 'content-length': '719'} RESP BODY: {"image": {"status": "ACTIVE", "updated": "2014-06-09T22:17:54Z", "links": [{"href": "http://nova.cloud.controller:8774/v1.1/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "self"}, {"href": "http://nova.cloud.controller:8774/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "rel": "bookmark"}, {"href": "http://External.Public.Port:9292/08cff06d13b74492b780d9ceed699239/images/28bed1bc-bc1c-4533-beee-8e0428ad40dd", "type": "application/vnd.openstack.image", "rel": "alternate"}], "id": "28bed1bc-bc1c-4533-beee-8e0428ad40dd", "OS-EXT-IMG-SIZE:size": 13147648, "name": "Cirros 0.3.1", "created": "2014-06-09T22:17:54Z", "minDisk": 0, "progress": 100, "minRam": 0, "metadata": {}}} +-------------------------------------+--------------------------------------+ | Property | Value | +-------------------------------------+--------------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | Cirros 0.3.1 | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | flavor | m1.tiny | | id | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | | security_groups | [{u'name': u'default'}] | | user_id | b3730a52a32e40f0a9500440d1ef1c7d | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2014-06-10T00:01:05Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | key_name | key2 | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | cirros | | adminPass | oFRbvRqif2C8 | | tenant_id | 08cff06d13b74492b780d9ceed699239 | | created | 2014-06-10T00:01:04Z | | metadata | {} | +-------------------------------------+--------------------------------------+ ubuntu@node7:~$ ubuntu@node7:~$ nova list +--------------------------------------+--------+--------+----------+ | ID | Name | Status | Networks | +--------------------------------------+--------+--------+----------+ | 2eb5e3ad-3044-41c1-bbb7-10f398f83e43 | cirros | ERROR | | +--------------------------------------+--------+--------+----------+ ubuntu@node7:~$ var/log/nova/nova-compute.log shows the following error: ... 
2014-06-10 00:01:06.048 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempting claim: memory 512 MB, disk 0 GB, VCPUs 1 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Memory: 3885 MB, used: 512 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Memory limit: 5827 MB, free: 5315 MB 2014-06-10 00:01:06.049 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total Disk: 146 GB, used: 0 GB 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Disk limit not specified, defaulting to unlimited 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Total CPU: 2 VCPUs, used: 0 VCPUs 2014-06-10 00:01:06.050 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] CPU limit not specified, defaulting to unlimited 2014-06-10 00:01:06.051 AUDIT nova.compute.claims [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Claim successful 2014-06-10 00:01:06.963 WARNING nova.network.quantumv2.api [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] No network configured! 
2014-06-10 00:01:08.347 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Instance failed to spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Traceback (most recent call last): 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] self._legacy_nw_info(network_info), 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] network_info = network_info.legacy() 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] AttributeError: 'list' object has no attribute 'legacy' 2014-06-10 00:01:08.347 32223 TRACE nova.compute.manager [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] 2014-06-10 00:01:08.919 AUDIT nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Terminating instance 2014-06-10 00:01:09.712 32223 ERROR nova.virt.libvirt.driver [-] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] During wait destroy, instance disappeared. 2014-06-10 00:01:09.718 INFO nova.virt.libvirt.firewall [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Attempted to unfilter instance which is not filtered 2014-06-10 00:01:09.719 INFO nova.virt.libvirt.driver [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Deleting instance files /var/lib/nova/instances/2eb5e3ad-3044-41c1-bbb7-10f398f83e43 2014-06-10 00:01:10.044 ERROR nova.compute.manager [req-41e53086-6454-4efb-bb35-a30dc2c780be b3730a52a32e40f0a9500440d1ef1c7d 08cff06d13b74492b780d9ceed699239] [instance: 2eb5e3ad-3044-41c1-bbb7-10f398f83e43] Error: ['Traceback (most recent call last):\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 864, in _run_instance\n set_access_ip=set_access_ip)\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1123, in _spawn\n LOG.exception(_(\'Instance failed to spawn\'), instance=instance)\n', ' File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__\n self.gen.next()\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1118, in _spawn\n self._legacy_nw_info(network_info),\n', ' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 703, in _legacy_nw_info\n network_info = network_info.legacy()\n', "AttributeError: 'list' object has no attribute 'legacy'\n"] 2014-06-10 00:01:40.951 32223 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 2861 2014-06-10 00:01:41.072 32223 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 146 2014-06-10 00:01:41.073 
32223 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 1 2014-06-10 00:01:41.262 32223 INFO nova.compute.resource_tracker [-] Compute_service record updated for node5:node5.maas ... Can't seem to find any entries in quantum.conf related to "legacy". Any help would be appreciated. Cheers,
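    The traceback appears to follow directly from the WARNING logged just before it: "No network configured!" means quantum returned no networks for the tenant, so nova receives a plain empty list as the network info and the legacy conversion then fails with 'list' object has no attribute 'legacy'. A minimal sketch of one thing to try, assuming the quantum client is available on the controller and the admin tenant really has no networks yet (the network name "private" and the 10.0.0.0/24 range below are only illustrative):

        # Create a tenant network and subnet so nova has something to attach the guest to
        quantum net-create private
        quantum subnet-create private 10.0.0.0/24 --name private-subnet

        # Boot again, passing the network explicitly; substitute the id printed
        # by net-create above for NET_ID
        nova boot --flavor 1 --image 28bed1bc-bc1c-4533-beee-8e0428ad40dd \
            --key_name key2 --security_group default \
            --nic net-id=NET_ID cirros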


  • Why Wifi no longer works 12.04.1

    - by Roger
    starting this morning over wifi (Realtek RTL8188CE) on CLEVO W253HU. May be due to the update before yesterday, more pilot managed, but somehow it worked yesterday. If someone has an idea of the problem. Back command lines: cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04.1 LTS" lsusb Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub Bus 002 Device 006: ID 192f:0416 Avago Technologies, Pte. Bus 002 Device 004: ID 5986:0315 Acer, Inc lspci 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) 02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) 03:00.0 Ethernet controller: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller (rev 05) 03:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 90) 03:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 90) 03:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 90) lspci -nn | grep -i net 02:00.0 Network controller [0280]: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter [10ec:8176] (rev 01) 03:00.0 Ethernet controller [0200]: JMicron Technology Corp. 
JMC250 PCI Express Gigabit Ethernet Controller [197b:0250] (rev 05) lspci -k 00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: agpgart-intel 00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: i915 Kernel modules: i915 00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: mei Kernel modules: mei 00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ehci_hcd 00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: snd_hda_intel Kernel modules: snd-hda-intel 00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.2 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 3 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5) Kernel driver in use: pcieport Kernel modules: shpchp 00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ehci_hcd 00:1f.0 ISA bridge: Intel Corporation HM65 Express Chipset Family LPC Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: iTCO_wdt 00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: ahci 00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: i2c-i801 02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) Subsystem: Realtek Semiconductor Co., Ltd. Device 9196 Kernel modules: rtl8192ce 03:00.0 Ethernet controller: JMicron Technology Corp. JMC250 PCI Express Gigabit Ethernet Controller (rev 05) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: jme Kernel modules: jme 03:00.1 System peripheral: JMicron Technology Corp. SD/MMC Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: sdhci-pci Kernel modules: sdhci-pci 03:00.2 SD Host controller: JMicron Technology Corp. Standard SD Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel modules: sdhci-pci 03:00.3 System peripheral: JMicron Technology Corp. MS Host Controller (rev 90) Subsystem: CLEVO/KAPOK Computer Device 4140 Kernel driver in use: jmb38x_ms Kernel modules: jmb38x_ms sudo lshw -C network *-network NON-RÉCLAMÉ description: Network controller produit: RTL8188CE 802.11b/g/n WiFi Adapter fabriquant: Realtek Semiconductor Co., Ltd. 
identifiant matériel: 0 information bus: pci@0000:02:00.0 version: 01 bits: 64 bits horloge: 33MHz fonctionnalités: pm msi pciexpress cap_list configuration: latency=0 ressources: portE/S:e000(taille=256) mémoire:f7d00000-f7d03fff *-network description: Ethernet interface produit: JMC250 PCI Express Gigabit Ethernet Controller fabriquant: JMicron Technology Corp. identifiant matériel: 0 information bus: pci@0000:03:00.0 nom logique: eth0 version: 05 numéro de série: 00:90:f5:c1:c6:45 taille: 100Mbit/s capacité: 1Gbit/s bits: 32 bits horloge: 33MHz fonctionnalités: pm pciexpress msix msi bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=jme driverversion=1.0.8 duplex=full ip=192.168.1.54 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s ressources: irq:44 mémoire:f7c20000-f7c23fff portE/S:d100(taille=128) portE/S:d000(taille=256) mémoire:f7c10000-f7c1ffff mémoire:f7c00000-f7c0ffff lsmod Module Size Used by btusb 18288 0 rfcomm 47604 0 bnep 18281 2 bluetooth 180104 11 btusb,rfcomm,bnep parport_pc 32866 0 ppdev 17113 0 binfmt_misc 17540 1 snd_hda_codec_realtek 224173 0 dm_crypt 23125 0 snd_hda_codec_hdmi 32474 0 uvcvideo 72627 0 videodev 98259 1 uvcvideo v4l2_compat_ioctl32 17128 1 videodev snd_hda_intel 33773 2 snd_hda_codec 127706 3 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel snd_hwdep 13668 1 snd_hda_codec snd_pcm 97188 3 snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec snd_seq_midi 13324 0 snd_rawmidi 30748 1 snd_seq_midi jmb38x_ms 17646 0 psmouse 87692 0 serio_raw 13211 0 memstick 16569 1 jmb38x_ms snd_seq_midi_event 14899 1 snd_seq_midi rtl8192ce 84826 0 rtl8192c_common 75767 1 rtl8192ce rtlwifi 111202 1 rtl8192ce snd_seq 61896 2 snd_seq_midi,snd_seq_midi_event snd_timer 29990 2 snd_pcm,snd_seq snd_seq_device 14540 3 snd_seq_midi,snd_rawmidi,snd_seq mac80211 506816 3 rtl8192ce,rtl8192c_common,rtlwifi mac_hid 13253 0 snd 78855 14 snd_hda_codec_realtek,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device cfg80211 205544 2 rtlwifi,mac80211 soundcore 15091 1 snd snd_page_alloc 18529 2 snd_hda_intel,snd_pcm mei 41616 0 lp 17799 0 parport 46562 3 parport_pc,ppdev,lp usbhid 47199 0 hid 99559 1 usbhid i915 473035 3 drm_kms_helper 46978 1 i915 drm 242038 4 i915,drm_kms_helper jme 41259 0 i2c_algo_bit 13423 1 i915 sdhci_pci 18826 0 sdhci 33205 1 sdhci_pci wmi 19256 0 video 19596 1 i915 iwconfig lo no wireless extensions. eth0 no wireless extensions. ifconfig eth0 Link encap:Ethernet HWaddr 00:90:f5:c1:c6:45 inet adr:192.168.1.54 Bcast:192.168.1.255 Masque:255.255.255.0 adr inet6: fe80::290:f5ff:fec1:c645/64 Scope:Lien UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Packets reçus:4513 erreurs:0 :0 overruns:0 frame:0 TX packets:4359 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:1000 Octets reçus:3471675 (3.4 MB) Octets transmis:712722 (712.7 KB) Interruption:44 lo Link encap:Boucle locale inet adr:127.0.0.1 Masque:255.0.0.0 adr inet6: ::1/128 Scope:Hôte UP LOOPBACK RUNNING MTU:16436 Metric:1 Packets reçus:686 erreurs:0 :0 overruns:0 frame:0 TX packets:686 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 lg file transmission:0 Octets reçus:64556 (64.5 KB) Octets transmis:64556 (64.5 KB) sudo iwlist scan lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. 
uname -r -m 3.2.0-30-generic x86_64 cat /etc/network/interfaces auto lo iface lo inet loopback nm-tool NetworkManager Tool State: connected (global) - Device: eth0 [Connexion filaire 1] ------------------------------------------ Type: Wired Driver: jme State: connected Default: yes HW Address: 00:90:F5:C1:C6:45 Capabilities: Carrier Detect: yes Speed: 100 Mb/s Wired Properties Carrier: on IPv4 Settings: Address: 192.168.1.54 Prefix: 24 (255.255.255.0) Gateway: 192.168.1.1 DNS: 192.168.1.1 sudo rfkill listrfkill list 1: hci0: Bluetooth Soft blocked: no Hard blocked: no The absence of line "Kernel driver in use:" the return of lspci-k made ??me think that it is not loaded yet but he seems to be. lsmod | grep rtl8192ce rtl8192ce 137478 0 rtlwifi 118749 1 rtl8192ce mac80211 506816 2 rtl8192ce,rtlwifi I found something disturbing in / var / log / syslog Sep 14 11:40:11 pcroger kernel: [ 64.048783] rtl8192ce-0:rtl92c_init_sw_vars():<0-0> Failed to request firmware! Sep 14 11:40:11 pcroger kernel: [ 64.048795] rtlwifi-0:rtl_pci_probe():<0-0> Can't init_sw_vars. Sep 14 11:40:11 pcroger kernel: [ 64.048835] rtl8192ce 0000:02:00.0: PCI INT A disabled Sep 14 11:40:11 pcroger kernel: [ 64.943345] ata1.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen Sep 14 11:40:11 pcroger kernel: [ 64.943358] ata1.00: failed command: READ FPDMA QUEUED Sep 14 11:40:11 pcroger kernel: [ 64.943371] ata1.00: cmd 60/00:00:00:68:6a/04:00:0b:00:00/40 tag 0 ncq 524288 in Sep 14 11:40:11 pcroger kernel: [ 64.943374] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout) Sep 14 11:40:11 pcroger kernel: [ 64.943381] ata1.00: status: { DRDY } Ubuntu and takes forever to start (2 min).
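    The telling lines in the syslog excerpt are rtl92c_init_sw_vars(): "Failed to request firmware!" followed by "Can't init_sw_vars": the rtl8192ce driver probes the card but cannot load the Realtek firmware blob, so no wireless interface is ever created (which is why iwconfig only sees lo and eth0). A minimal sketch of how one might check and restore the firmware on 12.04, assuming the usual file and package names for this driver (verify them on the affected machine):

        # The rtlwifi drivers expect their firmware under /lib/firmware/rtlwifi
        ls /lib/firmware/rtlwifi/

        # Reinstall the firmware package in case the files were lost in the recent update
        sudo apt-get install --reinstall linux-firmware

        # Reload the driver so it re-requests the firmware, then look for the new interface
        sudo modprobe -r rtl8192ce && sudo modprobe rtl8192ce
        iwconfig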


  • CodePlex Daily Summary for Friday, October 26, 2012

    CodePlex Daily Summary for Friday, October 26, 2012Popular ReleasesPowerShell Community Extensions: 2.1 Production: PowerShell Community Extensions 2.1 Release NotesOct 25, 2012 This version of PSCX supports both Windows PowerShell 2.0 and 3.0. See the ReleaseNotes.txt download above for more information.Building Windows 8 Apps with C# and XAML: Full Source Chapters 1 - 10 for Windows 8 Fix 001: This is the full source from all chapters of the book, compiled and tested on Windows 8 RTM. Includes a fix for the Netflix example from Chapter 6 that was missing a service reference.PdfReport: PdfReport 1.3: - Removed the limitation of defining non duplicate column names. See DuplicateColumns sample for more info. - Added horizontal stack panel mode. See CharacterMap sample for more info. - Added pdfStamper to onFillAcroForm of PdfTemplate. See QuestionsAcroForm sample for more info. Added 6 new samples (http://pdfreport.codeplex.com/SourceControl/BrowseLatest): - AccountingBalanceColumn - CharacterMap - CustomPriceNumber - DuplicateColumns - QuestionsAcroForm - QuestionsFormuComponents: uComponents v5.1.0: Continuing our mammoth 88341 release, we bring you v5.1 What's new in uComponents v5.1?New Data Types: SQL AutoComplete SQL DropDownList XPath AutoComplete New component: uMapper Umbraco compatibilityThis release is compatible with Umbraco 4.8.0 (and above). For previous versions, please use 90019 with Umbraco 4.5.2+ and 93259 with Umbraco 4.7.1+ (To reiterate: this release does not work with Umbraco 4.7.2 and earlier versions.) Legacy and retirementThere are a number of components t...Umbraco CMS: Umbraco 4.9.1: Umbraco 4.9.1 is a bugfix release to fix major issues in 4.9.0 BugfixesThe full list of fixes can be found in the issue tracker's filtered results. A summary: Split buttons work again, you can now also scroll easier when the list is too long for the screen Media and Content pickers have information of the full path of the picked item Fixed: Publish status may not be accurate on nodes with large doctypes Fixed: 2 media folders and recycle bins after upgrade to 4.9 The template/code ...AcDown????? - AcDown Downloader Framework: AcDown????? v4.2.2: ??●AcDown??????????、??、??、???????。????,????,?????????????????????????。???????????Acfun、????(Bilibili)、??、??、YouTube、??、???、??????、SF????、????????????。 ●??????AcPlay?????,??????、????????????????。 ● AcDown??????????????????,????????????????????????????。 ● AcDown???????C#??,????.NET Framework 2.0??。?????"Acfun?????"。 ????32??64? Windows XP/Vista/7/8 ???? 32??64? ???Linux ????(1)????????Windows XP???,????????.NET Framework 2.0???(x86),?????"?????????"??? (2)???????????Linux???,????????Mono?? ??2...WPF About Box: WPF About Box 1.2.1.1: Using more established names.Rawr: Rawr 5.0.2: This is the Downloadable WPF version of Rawr!For web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP)We now have a Rawr Official Addon for in-game exporting and importing of character data hosted on Curse. The Addon does not perform calculations like Rawr, it simply shows your exported Rawr data in wow tooltips and lets you export your character to Rawr (including ba...MCEBuddy 2.x: MCEBuddy 2.3.5: Changelog for 2.3.5 (32bit and 64bit) 1. Fixed a bug causing MCEBuddy to crash during or after installation on Windows XP 2. Bugfix for resource leak with UPnP which would lead to a failure after many days 3. 
Increased the UPnP discovery re-scan interval from 10 minutes to 30 minutes 4. Added support for specifying TVDB and IMDB id’s in the conversion task page (forcing the internet lookup for metadata)CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1025.5): [NEW] Support for connecting to CRM Online via Office 365 (OSDP) [NEW] Current connection information and loaded ribbon name are displayed in the status bar [IMPROVED] Connect dialog minor improvements and error message descriptions [IMPROVED] Connecting to a CRM server will close currently loaded ribbon upon confirmation (if another ribbon was loaded previously) [FIX] Fixed bug in Open Ribbon dialog which would not allow to refresh entity list more than onceSite Backup Repackager: Solution: Adding Feature References Detected section so user can check feature references to include in the new package.Tombola XNA: TombolaXNA 0.1: First release alphaSplitOS: SplitOS v0.0.4a: SplitOS v0.0.4a TUI Basic windows commands such as 'echo', 'cls', ... Basic linux commands such as 'print', 'clear', ... GUI Taskbar Animated mouse Font Open SAL Open SplitOS Application Language Compiler ( very unstable! )Readable Passphrase Generator: KeePass Plugin 0.8.0: Changes: Interrogative phrases (questions) like why did the statesman burgle amidst lucid sunlamps Support transitive / intransitive verbs (whether a verb needs a subject or not). Change adverbs to be either before or after the verb, at random. Add an "equal" version of each strength, where each possibility is equally likely (for password purists). 3401 words in the default dictionary (~400 more than previous release) Fixed bugs when choosing verb tensesMicrosoft Ajax Minifier: Microsoft Ajax Minifier 4.72: Fix for Issue #18819 - bad optimization of return/assign operator.DNN Module Creator: 01.01.00: Updated templates for DNN7 ( ie. DAL2, Web Service API ). Numerous bug fixes and enhancements.WPF Application Framework (WAF): WPF Application Framework (WAF) 2.5.0.390: Version 2.5.0.390 (Release Candidate): This release contains the source code of the WPF Application Framework (WAF) and the sample applications. Requirements .NET Framework 4.0 (The package contains a solution file for Visual Studio 2010) The unit test projects require Visual Studio 2010 Professional Changelog Legend: [B] Breaking change; [O] Marked member as obsolete WAF: Fix recent file list remove issue. WAF: Minor code improvements. BookLibrary: Fix Blend design time support o...Fiskalizacija za developere: FiskalizacijaDev 1.1: Ovo je prva nadogradnja ovog projekta nakon inicijalnog predstavljanja - dodali smo nekoliko feature-a, bilo zato što smo sami primijetili da bi ih bilo dobro dodati, bilo na osnovu vaših sugestija - hvala svima koji su se ukljucili :) Ovo su stvari riješene u v1.1.: 1. Bilo bi dobro da se XML dokument koji se šalje u CIS može snimiti u datoteku (http://fiskalizacija.codeplex.com/workitem/612) 2. Podrška za COM DLL (VB6) (http://fiskalizacija.codeplex.com/workitem/613) 3. Podrška za DOS (unu...Liberty: v3.4.0.0 Release 20th October 2012: Change Log -Added -Halo 4 support (invincibility, ammo editing) -Reach A warning dialog now shows up when you first attempt to swap a weapon -Fixed -A few minor bugsClosedXML - The easy way to OpenXML: ClosedXML 0.68.1: ClosedXML now resolves formulas! Yes it finally happened. If you call cell.Value and it has a formula the library will try to evaluate the formula and give you the result. 
For example: var wb = new XLWorkbook(); var ws = wb.AddWorksheet("Sheet1"); ws.Cell("A1").SetValue(1).CellBelow().SetValue(1); ws.Cell("B1").SetValue(1).CellBelow().SetValue(1); ws.Cell("C1").FormulaA1 = "\"The total value is: \" & SUM(A1:B2)"; var...New Projects40FINGERS DotNetNuke Windows8 Pin: 40FINGERS Windows8 Pin is a DotNetNuke Module that will enable pinning of your website to the Windows 8 Start Screen.clowncar: a tool for sharing your utils folder and keeping it up to dateConnection Strings for .NET 2: Merupakan class yang berisi beberapa connection string khusus untuk aplikasi .NET, class ini menggunakan referensi dari http://connectionstrings.comCRM 2011 Web Resource Linker/Publisher: This tool is a visual studio 2012 add-in that allows developers to quickly link and publish web resources to dynamics crm 2011 while inside the VS 2012 IDE.Dice Roller: A simple simulator of a dice roller.EFTracing & Logging: Demonstrates using the “Entity Framework Tracing” ElFinder.Net Connector: .NET connector for elfinder file manager version 2.x. Project contain library and ASP.NET MVC sample. Excel and PowerPivot Refresh Service: A Windows Service for periodically refreshing the data in Excel files, along with any embedded PowerPivot cubes.finalwebsite: This is my final website project.Flickr Reranking ASP.NET MVC: ASP.NET MVC, .NET, Flickr APIglobulin: videojuegoHealthVault ASP.NET MVC Sample: This project provides an object model for building HealthVault-connected applications with ASP.NET MVCHuman resource management - WFA: This project made ​​solution that allows you to control, human resource management at the company.Is a comprehensive system of recruitment planning, tracking...jFluent: jFluent is a Fluent style, light-weight validation framework for client-side validation. It is a jQuery plugin which can be used in ASP .NET MVC.jvp8: jvp8 is a dual-licensed commercial/GPL native wrapper which allows Java applications to easily use the VP8 video codec.Korean Baseball for Windows Phone: Windows Phone? ?? ???? ?????????. ???, ???, ???? ?????.Meus Projetos: Projetos andamentoMobySharp: MobySharp is an implementation of the Mobypicture.com API written in C#MS-SQL Server Post Exploitation Samples: Sample source code files and precompiled library files that correspond to the MS-SQL Post-exploitation presentation "When Databases Attack", by Rob Beck.On the Fly: On the Fly is a utility that enables you to compile your TypeScript files on the fly. Compiling your .ts files is just a click away!OpenMVCRM: Open source CRM project built on ASP.NET MVCpFinance: This is the application to facilitate individuals in managing their finance.Posh for Jammer: Jammer is are some PowerShell Functions calling the Yammer API.Quick Encoding Utility: This is a quick desktop utility which would provide a sneak preview of base64 string encoding (both to and from)ResearchWM: This is a new projecttestddhg1025201201: ertesttom10252012git01: fdstesttom10252012git02: fdstesttom10252012tfs01: dsa dsaUniSync: blah blahUnits for .NET: Strongly typed physical quantities and unit conversion.Web Crop Image (Restored): Cem Sisman was the original author of WebCropImage. After he deleted the project from CodePlex, I brought it back... along with over 50 bug fixes.

