Search Results

Search found 1976 results on 80 pages for 'nic wise'.


  • How to circumvent ISP Limiting "Unknown" traffic - (SSH)Proxy, VPN

    - by connery
    I am having issues using a proxy/VPN with my current ISP (Comenersol, Spain). From my point of view they limit traffic by protocol, or by traffic they "know" versus traffic they "don't know". I'll explain my findings so far below.

    Setup:
      - Internet connection in Spain: ~400-420 KByte/sec (speedtest.net)
      - OpenVPN server in Sweden (pfSense): 100/100 Mbit. LZO compression. TCP. Tun. AES-128.
      - Squid proxy server in Sweden (pfSense, same box as the VPN server): 100/100. Plain, no encryption. Runs in stealth mode to hide the use of the proxy.

    NOT running OpenVPN or the Squid proxy, these are my findings:
      - When I download a file from my pfSense box in Sweden, I get maximum speed.
      - When I run speedtest.net and choose any European server (including Swedish), I get max speed.
      - When I download a torrent (with a non-default port above 10K), I am limited to ~100 KByte/sec. Encryption is turned off.
      - If I download something through https, I get max speed.

    Running either the Squid proxy or the VPN, these are my findings:
      - When I download a file from my pfSense box in Sweden, I get ~100 KByte/sec.
      - When I run speedtest.net and choose any European server (including Swedish and Spanish), I get ~100 KByte/sec.
      - When I download a torrent, I get the same limitation, ~100 KByte/sec.
      - When I download something through https, I get ~100 KByte/sec.

    I verify the speeds above with the speedtest.net measurement and the Firefox measurement, in addition to having bmon running in a terminal in the background. This way I am certain that the speeds presented to me are in fact correct. If I connect through a different ISP with the VPN or the Squid proxy, I get better speeds (400 KByte/sec and up).

    In short: whenever I tunnel my traffic through Sweden, my Spanish ISP throttles the traffic. I thought tunneling it through Squid would solve the issue, since I would then no longer be hiding my traffic with encryption. This does not seem to be the case. Wget and fetch give the same result. I did not try 'nc', but I assume it would give the same result.

    Does anyone know how to circumvent this issue? I would very much like to get full speed with a Swedish IP, as this would let me stream TV at higher quality than today. 100 KByte/sec just does not cut it quality-wise. Thanks for reading. Looking forward to your help.
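    For anyone reproducing these measurements, a minimal sketch of a direct-versus-proxied comparison with wget (the host name and proxy port below are placeholders for the asker's pfSense box, not values from the question):

        # direct download, bypassing the tunnel; wget reports throughput itself
        wget -O /dev/null http://pfsense-box.example.se/testfile.bin

        # the same file fetched through the Squid proxy on the same box
        http_proxy=http://pfsense-box.example.se:3128 wget -O /dev/null http://pfsense-box.example.se/testfile.bin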

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit), and it is going to take me a while to find an acceptable solution.

    In the meantime I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source, e.g. an upstream application, which would be worse), which isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second:    25609.16 [#/sec] (mean)
        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second:    22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is 11% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not found" string from nginx. I have tried many things so far, but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the HTTP response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx (without hacking the source, which may be my next step)?
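    One avenue worth trying, offered as a sketch rather than a confirmed fix: the stock rewrite module's return directive sets both the status code and a short in-memory body, so no disk access and no echo module are needed. Assuming a reasonably recent nginx build, something like this may do what the question asks:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            # return emits the given status code together with the given body,
            # avoiding the 200 that echo forces
            return 404 "404 - Page not found";
        }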

    Read the article

  • Data capture from other sheet into Summary sheet

    - by Hemant
    I have an Excel workbook which has a Summary sheet, a Pending sheet and a Master sheet. My requirement is below; I am trying to develop a macro or VB logic for Excel. I want to control this workbook from the Summary sheet.

    Generate Fault Summary:
      - I have set the logic, but it doesn't give a warning if the sheet name already exists, so I need to add this logic (see the sketch at the end of this question).
      - When we press the Fault Report Summary command button, it copies the Master sheet with the name in cell "A6" and hides the Master sheet. When you select another month name, it generates the sheet for that month name.

    Generate Toll System Uptime:
      - When I select the sheet name and "Week" and then press the "Enter" command button, it should get the result from that sheet. Each sheet has its month detail in cell B2.
      - The uptime formulas, week-wise, are:
          Week-01: =(1680-SUMIFS(L5:L23,B5:B23,">="&B2,B5:B23,"<="&(B2+6)))/1680
          Week-02: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+7),B5:B23,"<="&(B2+13)))/1680
          Week-03: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+14),B5:B23,"<="&(B2+20)))/1680
          Week-04: =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2+21),B5:B23,"<="&(B2+27)))/1680
          Month:   =(1680-SUMIFS(L5:L23,B5:B23,">="&(B2),B5:B23,"<="&(DATE(YEAR(B2),1+MONTH(B2),1)-1)))/1680
      - The result should be reflected in the Summary sheet at cell B18.

    Pending Fault Report Summary:
      - Reports are segregated by status: while a fault's status is Open, it belongs in the Pending Fault Report; when its status is Close, it is closed.
      - Any fault with OPEN status in any sheet (Jan-13, Feb-13, Mar-13, etc.) should also appear in the Pending sheet, in ascending date order.
      - When a fault's status changes, it should be moved to that month's sheet, near its creation date. Once its status is Close, it should no longer appear in the Pending sheet.
      - Each fault has a reported date, and we monitor all faults according to the reported date.
      - When we press the Update Fault Report Summary command button, it should update as per the logic above.
      - Sometimes we export the pending fault report, so a date calendar should be present for choosing the start and end dates. When we press the Export command, it should export the pending fault report, which we should be able to save as Excel or PDF.
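    For the first sub-point (warn before creating a sheet whose name already exists), a minimal VBA sketch, assuming the new sheet's name is taken from cell A6 of the Summary sheet; the function and sheet names here are illustrative, not taken from the actual workbook:

        Function SheetExists(sheetName As String) As Boolean
            Dim ws As Worksheet
            On Error Resume Next
            Set ws = ThisWorkbook.Worksheets(sheetName)
            On Error GoTo 0
            SheetExists = Not ws Is Nothing
        End Function

        ' Possible use inside the Generate Fault Summary button handler:
        ' If SheetExists(Worksheets("Summary").Range("A6").Value) Then
        '     MsgBox "A sheet with this name already exists.", vbExclamation
        '     Exit Sub
        ' End If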

    Read the article

  • Is this iptables NAT exploitable from the external side?

    - by Karma Fusebox
    Could you please have a short look at this simple iptables/NAT setup? I believe it has a fairly serious security issue (due to being too simple). On this network there is one internet-connected machine (running Debian Squeeze/2.6.32-5 with iptables 1.4.8) acting as NAT/gateway for the handful of clients in 192.168/24. The machine has two NICs:

      - eth0: internet-facing
      - eth1: LAN-facing, 192.168.0.1, the default GW for 192.168/24

    The routing table is the two-NICs default, without manual changes:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        (externalNet)   0.0.0.0         255.255.252.0   U     0      0        0 eth0
        0.0.0.0         (externalGW)    0.0.0.0         UG    0      0        0 eth0

    NAT is then enabled only and merely by these actions; there are no more iptables rules:

        echo 1 > /proc/sys/net/ipv4/ip_forward
        /sbin/iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
        # (all iptables policies are ACCEPT)

    This does the job, but I miss several things here which I believe could be a security issue: there is no restriction on allowed source interfaces or source networks at all, and there is no firewalling part such as:

        (set policies to DROP)
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    And thus, the questions of my sleepless nights are:

      1. Is this NAT service available to anyone in the world who sets this machine as his default gateway? I'd say yes, because there is nothing indicating that an incoming external connection (via eth0) should be handled any differently than an incoming internal connection (via eth1) as long as the output interface is eth0 - and routing-wise that holds true for both external and internal clients that want to access the internet. So if I am right, anyone could use this machine as an open proxy by having his packets NATted here. Please tell me if that's right, or why it is not.
      2. As a "hotfix" I have added a "-s 192.168.0.0/24" option to the NAT-starting command. I would like to know whether not using this option was indeed a security issue, or just irrelevant thanks to some mechanism I am not aware of.
      3. As the policies are all ACCEPT, there is currently no restriction on forwarding eth1 to eth0 (internal to external). But what are the effective implications of currently NOT having the restriction that only RELATED and ESTABLISHED states are forwarded from eth0 to eth1 (external to internal)? In other words, should I rather change the policies to DROP and apply the two "firewalling" rules I mentioned above, or does the lack of them not affect security?

    Thanks for clarification!
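    For reference, a hardened variant that simply combines the asker's own "hotfix" with the two firewalling rules from the question might look like the sketch below; treat it as a starting point under those assumptions, not a vetted firewall:

        echo 1 > /proc/sys/net/ipv4/ip_forward

        # NAT only traffic that actually originates in the LAN
        /sbin/iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE

        # default-deny forwarding, then allow LAN->WAN plus return traffic only
        /sbin/iptables -P FORWARD DROP
        /sbin/iptables -A FORWARD -i eth1 -o eth0 -s 192.168.0.0/24 -j ACCEPT
        /sbin/iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT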

    Read the article

  • Which hardware to VM ratio for Build-Server virtualization?

    - by Martin
    Let's start by saying that I'm a total noob with respect to server virtualization. That is, I use VMs often during development, but they're simple desktop-machine things for me.

    Now to my problem: we have two (physical) build servers, one master and one slave, running Jenkins to do daily tasks and build (Visual C++ builds) the release packages for our software. As such these machines are critical to our company, because we do lots of releases, and without a controlled environment to create them we can't ship fixes. (And currently there's no proper backup of these machines in place, because they do not hold any data as such - it just would be a major pain to set them up again should they go bust. But setting up a backup that I'd know would work in case of HW failure would be even more pain, so we have skipped that until now.)

    Therefore (and for scaling purposes) we would like to go virtual with these machines. Outsourcing to the cloud is not an option, not at all, so we'll have to use on-premises hardware and VM hosts. Each build server (master or slave) is a fully configured (installs, licenses, shares in the case of the master, ...) Windows Server box. I would now ideally like to just convert the two existing physical nodes to VM images and run them, and later add more VM slave instances as clones of the existing ones.

    And here begin my questions:

      - Should I go for one VM per hardware box, or for something where a single piece of hardware runs multiple VMs? The latter would mean a single point of failure hardware-wise, which doesn't seem like a good idea ... or?
      - Since we're doing C++ compilation with Visual Studio, I assume that during a build the hardware (processor cores + disk) will be fully utilized, so going with more than one build node per hardware box doesn't seem to make much sense?
      - With respect to hardware options, does it make any difference which VM software we use (VMware, MS, VirtualBox, ...)? (We're using Windows exclusively for our builds.)

    Regarding budget: we have a normal small company (20 developers) budget for this. ;-) That is, if it's going to cost a few k$, it's going to cost. If it's free - the better. I strongly prefer solutions without multi-k$ maintenance costs per year.

    Read the article

  • Apache certificates for some urls not working

    - by Vegaasen
    We are having a rather strange problem with an Apache installation. Here is a short summary. Currently I'm setting up Apache with https and server certificates. This is fairly easy and works straight out of the box, as expected. This is the configuration for this setup:

        Listen 443
        SSLEngine on
        SSLCertificateFile "/progs/apache/ssl/example-site.no.pem"
        SSLCertificateKeyFile "/progs/apache/ssl/example-site.no.key"
        SSLCACertificateFile "/progs/apache/ssl/ca/example_root.pem"
        SSLCADNRequestFile "/progs/apache/ssl/ca/example_intermediate.pem"
        SSLVerifyClient none
        SSLVerifyDepth 3
        SSLOptions +StdEnvVars +ExportCertData
        RequestHeader set ssl-ClientCert-Subject-CN "%{SSL_CLIENT_S_DN}s"
        RewriteEngine On
        ProxyPreserveHost On
        ProxyRequests On
        SSLProxyEngine On
        ...
        <LocationMatch /secureStuff/$>
          SSLVerifyClient require
          Order deny,allow
          Allow from All
        </LocationMatch>
        ...
        <Proxy balancer://exBalancer>
          Header add Set-Cookie "EX_ROUTE=EB.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
          BalancerMember http://10.0.0.1:7200 route=ee1 retry=300 flushpackets=off keepalive=on
          BalancerMember http://10.0.0.2:7200 route=ee2 retry=300 flushpackets=off keepalive=on status=+H
          ProxySet stickysession=EX_ROUTE scolonpathdelim=Off timeout=10 nofailover=off failonstatus=505 maxattempts=1 lbmethod=bybusyness
          Order deny,allow
          Allow from all
        </Proxy>
        RewriteCond %{REQUEST_URI} !^/index.html [NC]
        RewriteRule ^/(.*)$ balancer://exBalancer/$1 [P,NC]
        ProxyPassReverse / balancer://exBalancer/
        Header edit Set-Cookie "(.*)" "$1;HttpsOnly"
        ...

    So, everything works fine and as expected for all of the pages that are not part of the LocationMatch directive. When requesting something that matches the LocationMatch directive, I'm asked for a certificate (hence the SSLVerifyClient require attribute), and I'm offered all the correct certificates in my browser based on the root/intermediate chain. After choosing a certificate and clicking "OK", this is what pops up in the Apache logs:

        [ssl:info] [pid 9530:tid 25] [client :43357] AH01998: Connection closed to child 86 with abortive shutdown (
        [Thu Oct 11 09:27:36.221876 2012] [ssl:debug] [pid 9530:tid 25] ssl_engine_io.c(1171): (70014)End of file found: [client 10.235.128.55:45846] AH02007: SSL handshake interrupted by system [Hint: Stop button pressed in browser?!]

    And this just spams the logs. What is happening here? I can see this configuration working on my local machine, but not on one of our servers. There are no configuration differences between the servers, only minor application-wise changes. I've tried the following:

      1. Removing the CA-certificate check (works)
      2. Requiring the CA certificate for the whole site (works)
      3. Adding "SSLVerifyClient optional" (does not work)
      4. ++

    Server/application information:

    Local (working):
      - OpenSSL 1.0.1x
      - Apache 2.4.3
      - Ubuntu
      - mpm: event
      - every relevant option turned on

    Server (failing):
      - OpenSSL 0.9.8e
      - Apache 2.4.2
      - SunOS
      - mpm: worker
      - every relevant option turned on

    Please let me know if more information is needed; I'll provide it instantly.

    Brief sum-up:
      - Running Apache 2.4
      - Server certificates work just fine
      - Client certificates for some Locations do not work; it fails with the errors above

    PS: Could it be related to the OpenSSL version and the "renegotiation" issues around TLS/SSLv3?

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide:

      - Hosting a MySQL database (for training and testing purposes)
      - FTP server
      - Personal mail server
      - Home media server

    With this in mind I've done some research and found some viable solutions:

      - A standard PC with the appropriate software (either second-hand or new)
      - A non-solid-state mini-ITX system
      - A solid-state, fanless mini-ITX system

    I've also noted the pros and cons of each system:

      - A standard second-hand PC with old hardware would be the cheapest option. It could also have lacking processing power, not enough RAM and generally faulty hardware. Also, huge power consumption, heat generation and noise levels.
      - A standard new PC would have top-notch hardware and will stay that way for quite some time, so it's a good investment. But again, the main problems are power consumption, heat generation and noise levels.
      - A non-solid-state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware. But it will generate noise and heat, which will be even worse because of the size.
      - A solid-state, fanless mini-ITX system would have all the advantages of a non-solid-state mini-ITX but with minimal noise and heat. The main disadvantage is the read/write problems of flash memory.

    All in all I'm leaning towards a non-solid-state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are:

      - Are all these services even providable from a single server? To my best understanding they are, but then again, I might be wrong.
      - Are any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest?

    Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend?

    Regarding FTP services, I've heard good things about FileZilla. Does anyone have experience with that? Do you recommend it, or something else? Regarding the mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as the mail service - any recommended software?

    Thank you very much.

    Read the article

  • How to bind grid in ASP.NET?

    - by Abid Ali
    I can't bind my grid. I don't know what I am doing wrong; the grid appears empty when I run the program. Here is my code:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!IsPostBack)
                this.BindGrid(this.GridView1);
        }

        private void BindGrid(GridView grid)
        {
            SqlCommand cmd = new SqlCommand("Select * from Person", cn);
            SqlDataAdapter da = new SqlDataAdapter(cmd);
            DataTable dt = new DataTable();
            da.Fill(dt);
            grid.DataSource = dt;
            grid.DataBind();
        }

        <body>
            <form id="form1" runat="server">
            <div style="margin-left: 240px">
                <asp:GridView ID="GridView1" runat="server" CellPadding="4" ForeColor="#333333"
                    GridLines="None" Width="856px" AutoGenerateColumns="false" ShowFooter="true"
                    ShowHeader="true" BorderStyle="Groove" CaptionAlign="Top"
                    HorizontalAlign="Center" onrowdatabound="GridView1_RowDataBound">
                    <RowStyle BackColor="#F7F6F3" ForeColor="#333333" />
                    <Columns>
                        <asp:BoundField HeaderText="ID" />
                        <asp:BoundField HeaderText="First Name" />
                        <asp:BoundField HeaderText="Last Name" />
                        <asp:BoundField HeaderText="Home Phone #" />
                        <asp:BoundField HeaderText="Cell #" />
                        <asp:BoundField HeaderText="Email Address" />
                        <asp:BoundField HeaderText="NIC #" />
                        <asp:TemplateField HeaderText="Action">
                            <ItemTemplate>
                                <asp:Button ID="Button1" runat="server" Text="Button" />
                                <asp:Button ID="Button2" runat="server" Text="Button" />
                            </ItemTemplate>
                        </asp:TemplateField>
                    </Columns>
                    <FooterStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
                    <PagerStyle BackColor="#284775" ForeColor="White" HorizontalAlign="Center" />
                    <SelectedRowStyle BackColor="#E2DED6" Font-Bold="True" ForeColor="#333333" />
                    <HeaderStyle BackColor="#5D7B9D" Font-Bold="True" ForeColor="White" />
                    <EditRowStyle BackColor="#999999" />
                    <AlternatingRowStyle BackColor="White" ForeColor="#284775" />
                </asp:GridView>
            </div>
            </form>
        </body>
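    A likely culprit, judging from the markup alone: AutoGenerateColumns is false, but none of the BoundFields declare a DataField, so the grid binds and renders rows with empty cells. A hedged sketch of the fix - the column names below are assumptions about the Person table's schema, so adjust them to match:

        <asp:BoundField DataField="ID" HeaderText="ID" />
        <asp:BoundField DataField="FirstName" HeaderText="First Name" />
        <asp:BoundField DataField="LastName" HeaderText="Last Name" />
        <!-- ...and likewise for the remaining columns -->

    The C# side is probably fine as written, since SqlDataAdapter.Fill opens and closes the connection itself when it is handed a closed connection, though wrapping cn, cmd and da in using blocks would be tidier.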

    Read the article

  • Best WordPress Video Themes for a Video Blog

    - by Matt
    WordPress has made blogging so easy and fun, and there are plenty of video blog themes to pick from. However, there is always rarity in quality. We at JustSkins have gathered a list of high-quality, tested and tried video themes. While looking for WordPress themes for vloggers, we knew all along that there are very few, yet some of them are just brilliant premium WordPress themes. Let's look at some themes you can install on your vlog right now.

    On Demand 2.0 - A fully featured premium video WordPress theme from Press75. Includes a theme options panel for personal customization and content management options, post thumbnails, a drop-down navigation menu, custom widgets and lots more.
    Demo | Price: $75 | DOWNLOAD

    VideoZoom - An outstanding premium WordPress video theme from WPZoom featuring standard video integration; additionally, it lets you play videos from all the popular video websites. The VideoZoom theme also includes a featured video slider on the homepage, multiple post layout options, a theme options panel, WordPress 3.0 menus, backgrounds, etc.
    Demo | Price Single: $69, Developer: $149 | DOWNLOAD

    Vidley - Press75's easy-to-use premium WordPress video theme. This theme is full of great features and can be a perfect choice if you intend to make it a portal someday; it is scalable enough to shape into a news portal or a portfolio. The theme is widget-ready, can place Featured Content and Featured Category sections on the homepage, and the drop-down menus are nifty!
    Demo | Price: $75 | DOWNLOAD

    Live - A premium video WordPress theme designed for streaming video and live event broadcasting. You can embed live video broadcasts from third-party services like Ustream, and it features a prominent timer counting down to the next broadcast, rotating bumper images, Facebook and Twitter integration for viewer interaction, a theme admin options panel and more, making this theme one of its kind.
    Demo | Price: $99, Support License: $149 | DOWNLOAD

    Groovy Video - WooThemes is a pioneer in making beautiful WordPress themes, and this one is built with the video blogger in mind. The Groovy theme is a very colourful premium video blog WordPress theme. Creating video posts is quick and easy with just a copy/paste of the video's embed code. The theme enables automatic video resizing, offers plenty of widgets, and also allows you to pick the color of your choice.
    Price: Single Use $70, Developer Price: $150 | DOWNLOAD

    Video Flick - Another exciting video blogging theme by Press75. Video Flick is compatible with any video service that provides embed code, or, if you want to host your own videos, with FLV (Flash Video) and QuickTime formats. This theme allows you to keep standard blog posts and/or video posts, and you can pick a light or dark color option.
    Demo | Price: $75 | DOWNLOAD

    WooTube - An excellent premium video WordPress theme from WooThemes. The WooTube theme is a very easy video blog platform, as it comes with automatic video resizing, a completely widgetised sidebar and 7 different colour schemes to choose from. The theme can also be used as a normal blog or a gallery. A very wise choice!
    Price: Single Use $70, Developer Price: $150 | DOWNLOAD

    eVid - One of the nicest WordPress themes designed specifically for video bloggers. Simple to integrate videos from video hosts such as YouTube, Vimeo, Veoh, MetaCafe, etc.
    Demo | Price: $19 | DOWNLOAD

    Tubular - A premium video WordPress theme from StudioPress which can also be used as a simple website or blog. The theme is also available in a light color version.
    Demo | Price: $59.95 | DOWNLOAD

    Video Elements 2.0 - Another beautiful premium video WordPress theme from Press75. Video Elements 2.0 has been redesigned to include the features you need to easily run and maintain a video blog on WordPress.
    Demo | Price: $75 | DOWNLOAD

    TV Elements 3.0 - The theme includes a featured video carousel on the homepage which can display any number of videos, a featured category section which displays up to 12 channels, automatic thumbnail creation and lots more.
    Demo | Price: $75 | DOWNLOAD

    Wave - A beautiful premium video WordPress theme, flexible and super cool looking. The design has a very earthy feel to it. The theme has a featured video area and latest listings on the homepage. All in all, a simple design with no fancy features.
    Demo | Price: $35 | DOWNLOAD

    Read the article

  • Developer Training – A Conclusive Summary- Part 5

    - by pinaldave
    Developer Training - Importance and Significance - Part 1
    Developer Training - Employee Morals and Ethics - Part 2
    Developer Training - Difficult Questions and Alternative Perspective - Part 3
    Developer Training - Various Options for Developer Training - Part 4
    Developer Training - A Conclusive Summary - Part 5

    We have now reached the end of our series about developer training. I hope you have come away thinking that training is the best way to advance in your company, and that you are looking for training opportunities right now. If you're still not convinced, here are a few things to keep in mind: training benefits the employer and the employee; a well-trained employee is a happy employee, and a happy employee is more efficient and productive; and while training an employee might be expensive, it is less expensive than hiring a new person. Whether you look at it from the employee's or the company's point of view, there are always advantages to training.

    A Broader View

    This series was written for developer training, but it is not limited to developers only. There are IT pros, system admins, DBAs and many other technology professionals; this article series is for all professionals in the world. The concepts and takeaways remain common across all platforms, regardless of technology affiliation.

    Pass the Knowledge

    If I had to pick the one piece of advice about training that matters most, I would pick this: pass the knowledge on. Once you have decided in favor of training, there is more to it than simply showing up and staying awake. It is always a good idea to take notes - at the very least it will help you stay awake, but notes often serve as a good way to remember your training when you go back to work. You can also use them to pass your new knowledge on to fellow employees, which can be very fun and rewarding.

    Right Place, Right Time and Right Training

    There are so many ways to get developer training. In-person and on-the-job training is easy to come by and is the most usual type of training, but don't overlook my favorite type: on demand. Being able to learn at your own pace, in your own place and on your own time makes training a realistic goal for almost every employee. I can think of nothing more important in life than furthering your education, especially when you work in a field that is constantly changing - like technology. Whether you like it or not, training is incredibly important. That is why I feel it is so important to receive it. And because there are so many different training formats - live, online, through books, through people - I am certain that we all can find a way to be trained that best suits our goals and personalities.

    The Teacher Within

    If you think of anyone who is a master of the technology field or an incredibly successful developer (the obvious examples that spring to mind are Steve Jobs or Bill Gates), you will also find a teacher. Both these individuals spent their lives developing better technology, but also educating other developers and the public about how to use these technologies and how they can change your life for the better. I think that we all should strive to be like these wonderful teachers. We might not be able to change the world, but we can certainly change a few lives around us. Even if we never turn into trainers ourselves, being trained as a student can be a good exercise. We learn a lot and become better employees - and it would not be a stretch to say that this makes us better individuals as well.

    Final Say

    I think learning and growing in your chosen field is not only a good idea career-wise, but can be fun, too! I for one never feel more alive than when I am learning about something I am really passionate about. I think my job title - technology evangelist - explains how enthusiastic I am about this subject. But please don't think of me only as someone who wants to train and educate others (although this is also one of my passions). I am also a passionate student. I enjoy learning new things and am always on the lookout for new ways to learn and new people to learn from.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • SQL Developer at Oracle Open World 2012

    - by thatjeffsmith
    We have a lot going on in San Francisco this fall. One of the most personally exciting bits, for what will be my 4th or 5th Open World, is that this will be my FIRST as a member of Team Oracle. I've presented once before, but most years it was just me pressing flesh at the vendor booths. After 3-4 days of standing and talking, you're ready to just go home and not do anything for a few weeks. This time I'll have a chance to walk around and talk with our users and get a good idea of what's working and what's not. Of course it will be a great opportunity for you to find us and get to know your SQL Developer team!

    3.4 miles across and back - thanks Ashley for signing me up for the run!

    This year is going to be a bit crazy. Work-wise I'll be presenting twice, working a booth, and proctoring several of our Hands-On Labs. The fun parts will be equally crazy: running across the Bay Bridge (I don't run), swimming the Bay (I don't swim), having my wife fly out on Wednesday for the concert, and then our first WhiskyFest on Friday (I do drink whisky, though). But back to work - let's talk about EVERYTHING you can expect from the SQL Developer team.

    Booth Hours

    We'll have 2 'demo pods' in the Exhibition Hall over at Moscone South. Look for the farm of Oracle booths; we'll be there under the signs that say 'SQL Developer.' There will be several people on hand, mostly developers (yes, they still count as people), who can answer your questions or demo the latest features. Come by and say 'Hi!', and let us know what you like and what you think we can do better. Seriously.

      - Monday: 10 AM - 6 PM
      - Tuesday: 9:45 AM - 6 PM
      - Wednesday: 9:45 AM - 4 PM

    Presentations

    Stop by for an hour, pull up a chair, sit back and soak in all the SQL Developer goodness. You'll only have to suffer my bad jokes for two of the presentations, so please at least try to come to the other ones. We'll be talking about data modeling, migrations, source control, and new features in versions 3.1 and 3.2 of SQL Developer and SQL Developer Data Modeler.

      - Monday 10:45 - What's New in SQL Developer
      - Monday 4:45 - Why Move to Oracle Application Express Listener
      - Tuesday 10:15 - Using Subversion in Oracle SQL Developer Data Modeler
      - Tuesday 11:45 - Oracle SQL Developer Tips & Tricks
      - Tuesday 5:00 - Database Design with Oracle SQL Developer Data Modeler
      - Wednesday 11:45 - Migrating Third-Party Databases and Applications to Oracle Exadata 11g
      - Wednesday 3:30 - Enterprise Options and Management Packs for Developers

    Hands On Labs (HOLs)

    The Hands-On Labs allow you to come into a classroom environment, sit down at a computer, and run through some exercises. We'll provide the hardware, software, and training materials. It's self-paced, but we'll have several helpers walking around to answer questions and chat up any SQL Developer or database topic that comes to mind. If your employer is sending you to Open World for all that great training, the HOLs are a great opportunity to capitalize on that. They are only 60 minutes each, so you don't have to worry about burning out. And there's no homework! Of course, if you do want to take the labs home with you, many are already available via the Developer Day Hands-On Database Applications Developer Lab. You will need your own computer for those, but we'll take care of the rest.

      - Wednesday 10:15 - PL/SQL Development and Unit Testing with Oracle SQL Developer
      - Wednesday 11:45 - Performance Tuning with Oracle SQL Developer
      - Thursday 11:15 - The Soup to Nuts of Data Modeling with Oracle SQL Developer Data Modeler

    Some Parting Advice

    Always wanted to meet your favorite Oracle authors, speakers, and thought-leaders? Don't be shy; walk right up to them and introduce yourself. Normal social rules still apply, but at the conference everyone is open to meeting and talking with attendees. Just understand that if there's a line, you might only get a minute or two. It's a LONG conference though, so you'll have plenty of time to catch up with everyone. If you're going to be around on Tuesday evening, head on over to the OTN Lounge from 4:30 to 6:30 and hang out for our Tweet Meet. That's right, all the Oracle nerds on Twitter will be there in one place. Be sure to put your Twitter handle on your name tag so we know who you are!

    Read the article

  • Using CMS for App Configuration - Part 1, Deploying Umbraco

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2014/06/04/using-cms-for-app-configurationndashpart-1-deploy-umbraco.aspx

    Since my last post on using CMS for semi-static API content, "How about a new platform for your next API… a CMS?", I've been using the idea for centralized app configuration, and this post is the first in a series that will walk through how to do that, step by step. The approach gives you a platform-independent, easily configurable way to specify your application configuration for different environments, with a built-in approval workflow, change auditing and the ability to easily roll back to previous settings. It's like Azure Web and Worker Roles, where you can specify settings that change at runtime, but it's not specific to Azure - you can use it for any app that needs changeable config, provided it can access the Internet.

    The series breaks down into four posts:

      1. Deploying Umbraco - the CMS that will store your configurable settings and the current values;
      2. Publishing your config - create a document type that encapsulates your settings and a template to expose them as JSON;
      3. Consuming your config - in .NET, a simple client that uses dynamic objects to access settings;
      4. Config lifecycle management - how to publish, audit, and roll back settings.

    Let's get started.

    Deploying Umbraco

    There's an Umbraco package on Azure Websites, so deploying your own instance is easy - but there are a couple of things to watch out for, so this step-by-step will put you in a good place.

    Create From Gallery

    The easiest way to get started is with an Azure subscription: navigate to add a new Website and then Create From Gallery. Under CMS, you'll see an Umbraco package (currently at version 7.1.3).

    Configure Your App

    For high availability and scale, you'll want your CMS on separate kit from anything else you have in Azure, so in the configuration of Umbraco I'd create a new SQL Azure database, which Umbraco will use to store all its content. You can use the free 20 MB database option if you don't have demanding NFRs, or if you're just experimenting. You'll need to specify a password for a SQL Server account which the Umbraco service will use, and changing from the default username umbracouser is probably wise.

    Specify Database Settings

    You can create a new database on an existing server if you have one, or create a new one. If you create a new server, *do not* use the same username for the database server login as you used for the Umbraco account. If you do, the deployment will fail later. Think of this as the SQL admin account that you can use for managing the db; the previous account was the service account Umbraco uses to connect.

    Make Tea

    If you have a fast kettle. It takes about two minutes for Azure to create and provision the website and the database.

    Install Umbraco

    So far we've deployed an empty instance of Umbraco using the Azure package, and now we need to browse to the site and complete installation. My Website was called my-app-config, so to complete installation I browse to http://my-app-config.azurewebsites.net. Enter the credentials you want to use to login - this account will have full admin rights to the Umbraco instance. Note that between deploying your new Umbraco instance and completing installation in this step, anyone who knows the URL can browse to your website and complete the installation themselves with their own credentials. A remote possibility, but it's there.

    From this page *do not* click the big green Install button. If you do, Umbraco will configure itself with a local SQL Server CE database (an .sdf file on the Web server) and ignore the SQL Azure database you've carefully provisioned and may be paying for. Instead, click on the Customize link.

    Configure Your Database

    You need to enter your SQL Azure database details here, so you'll have to get the server name from the Azure Management Console. You don't need to explicitly grant your Umbraco website access to the database, though. Click Continue and you'll be offered a "starter" website to install. If you don't know Umbraco at all (but you are familiar with ASP.NET MVC), then a starter website is worthwhile to see how it all hangs together. But after a while you'll have a bunch of artifacts in your CMS that you don't want, and you'll have to work out which ones you can safely delete. So I'd click "No thanks, I do not want to install a starter website" and give yourself a clean Umbraco install. When it completes, the installation will log you in to the welcome screen for managing Umbraco, which you can access from http://my-app-config.azurewebsites.net/umbraco.

    That's It

    Easy. Umbraco is installed, using a dedicated SQL Azure instance that you can separately scale, sync and back up, and it is ready for your content. In the next post, we'll define what our app config looks like and publish some settings for the dev environment.

    Read the article

  • Building InstallShield based Installers using Team Build 2010

    - by jehan
    Over the last few weeks, I have been working on application packaging using all the widely used tools, like InstallShield, WISE, WiX and Visual Studio Installer. So, I thought it would be good to post about how to build the installers developed with these tools using Team Build 2010. This post will focus on how to build InstallShield-generated packages with Team Build 2010.

    For the release of VS2010, Microsoft partnered with Flexera, the makers of InstallShield, to create InstallShield Limited Edition, especially for the customers of Visual Studio. Microsoft originally planned to release WiX (Windows Installer XML) with VS2010, but later dropped WiX from VS2010 for reasons best known to them and partnered with InstallShield for the Limited Edition. This disappointed a lot of people, because InstallShield Limited Edition provides only a few of InstallShield's features, and it may not be feasible to build complex installer packages with it; it also requires a license, whereas WiX is open source with no license costs and has proved efficient at building the most complex packages. Of the full feature set offered by InstallShield, listed below, only the last three are available in InstallShield Limited Edition:

      - Standalone Build System: maintain a clean build machine by using only the part of InstallShield that compiles the installations.
      - InstallShield Best Practices Validation Suite: avoid common installation issues.
      - Try and Die Functionality: create a fully functional trial version of your product.
      - InstallShield Repackager: create Windows Installer setups from any legacy installation.
      - Multilingual Support: present installation text in up to 35 languages.
      - Microsoft App-V™ Support: deploy your applications as App-V virtual packages that run without conflict.
      - Industry-Standard InstallScript: achieve maximum flexibility in your installations.
      - Dialog Editor: modify the layout of existing end-user dialogs, create new custom dialogs, and more.
      - Patch Creation: build updates and patches for your products.
      - Setup Prerequisite Editor: easily control prerequisite restart behavior and source locations.
      - String Editor View: control the localizable text strings displayed at run time with this spreadsheet-like table.
      - Text File Changes View: configure search-and-replace actions for content in text files to be modified at run time.
      - Virtual Machine Detection: block your installations from running on virtual machines.
      - Unicode Support: improve multi-language installation development.
      - Support for 64-Bit COM Extraction: extract COM data from a 64-bit COM server.
      - Windows Installer Installation Chaining: add MSI packages to your main installation and chain them together.
      - XML Support: save time by quickly testing XML configuration changes to installation projects.
      - Billboard Support for Custom Branding: display Adobe Flash billboards and other graphic files during the install process.
      - SaaS Support (IIS 7 and SSL Technologies): easily deploy Windows-based Web applications.
      - Project Assistant: jumpstart a project by using a simplified set of views.
      - Support for Digital Signatures: save time by digitally signing all your files at build time.
      - Easily Run Custom Actions: schedule a custom action to run at precisely the right moment in your installation.
      - Installation Prerequisites: check for and install prerequisites before your installation is executed.

    To create an InstallShield project in Visual Studio and build it using Team Build 2010, you first have to add the InstallShield project template to your solution. If you want to use InstallShield Limited Edition, you can add it from File → New → Project → Other Project Types → Setup and Deployment → InstallShield LE; if you are using other versions of InstallShield, you have to add it from File → New → Project → InstallShield Projects. Here, I'm using InstallShield 2011 Premier Edition, as I already have it installed. I have created a simple package for the TailSpin application, which has a feature called Web, a few components and an IIS web site for the TailSpin application.

    Before starting work on this, I thought I might need to build the package by calling the Invoke Process activity in the build process template, or by creating a new custom activity. In fact it built without any changes to the build process template - but it was failing with the error message below:

        C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.targets (68): The "InstallShield.Tasks.InstallShield" task could not be loaded from the assembly C:\Program Files (x86)\MSBuild\InstallShield\2010Limited\InstallShield.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files (x86)\MSBuild\InstallShield\2011\InstallShield.Tasks.dll' or one of its dependencies. An attempt was made to load a program with an incorrect format. Confirm that the <UsingTask> declaration is correct, that the assembly and all its dependencies are available, and that the task contains a public class that implements Microsoft.Build.Framework.ITask.

    This error is due to the 64-bit build machine I'm using; the issue is reproducible whenever you queue a build on a 64-bit build machine. To avoid it, you have to ensure that the build definition for your InstallShield project loads the InstallShield.Tasks.dll file (which is a 32-bit file); otherwise, you will encounter this build error informing you that InstallShield.Tasks.dll could not be loaded. To select the 32-bit version of MSBuild, click the Process tab of your build definition in Team Explorer. Then, under the Advanced node, find the MSBuild Platform setting and select x86. Note that if you are using a 32-bit build machine, you can select either Auto or x86 for the MSBuild Platform setting. Once I made the above change, the build succeeded.
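    For what it's worth, the same 32-bit constraint applies outside Team Build: the x86 MSBuild executable lives under Framework rather than Framework64, so a local command-line build can load the 32-bit task DLL directly. A hedged sketch (the solution name and configuration are placeholders):

        REM 32-bit MSBuild can load the x86-only InstallShield.Tasks.dll
        "%windir%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" TailSpin.sln /p:Configuration=Release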

    Read the article

  • Using the Static Code Analysis feature of Visual Studio (Premium/Ultimate) to find memory leakage problems

    - by terje
    Memory for managed code is handled by the garbage collector, but if you use any kind of unmanaged code - native resources of any kind, open files, streams, window handles - your application may leak memory if these are not properly handled. To handle such resources, the classes that own them should implement the IDisposable interface, and preferably implement it according to the pattern described for that interface.

    When you suspect a memory leak, the immediate impulse would be to start up a memory profiler and start digging into that. However, before you follow that impulse, do a Static Code Analysis run with a ruleset tuned to finding possible memory leaks in your code. If you get any warnings from this, fix them before you go on with the profiling.

    How to use a ruleset

    In Visual Studio 2010 (Premium and Ultimate editions) you can define your own rulesets containing a list of Static Code Analysis checks. I have defined the memory checks shown in the lists below as ruleset files, which can be downloaded - see the bottom of this post. Once you have them, you can easily attach them to every project in your solution using the Solution Properties dialog. Right-click the solution and choose Properties at the bottom, or use the Analyze menu and choose "Configure Code Analysis for Solution". In this dialog you can now choose the Memorycheck ruleset for every project you want to investigate. Pressing Apply or OK opens every project file and changes the project's code analysis ruleset to the one we have specified here.

    How to define your own ruleset (skip this if you just download my predefined rulesets)

    If you want to define the ruleset yourself, open the properties on any project, choose the Code Analysis tab near the bottom, choose any ruleset in the drop box and press Open. Clear out all the rules by selecting "Source Rule Sets" in the Group By box and unselecting the box. Change the Group By box to ID, and select the checks you want to include from the lists below. Note that you can change the action for each check to Warning, Error or None, None being the same as unchecking the check. Now go to the Properties window and set a new name and description for your ruleset. Then save the ruleset (File/Save As) using the new name as its file name, and use it for your projects as detailed above. It can also be wise to add the ruleset to your solution as a solution item; that way it's there if you want to enable Code Analysis in some of your TFS builds.

    Running the code analysis

    In Visual Studio 2010 you can either run code analysis project by project, using the context menu in the Solution Explorer and choosing "Run Code Analysis", or you can define a new solution configuration - call it, for example, Debug (Code Analysis) - and for each project in it enable "Enable Code Analysis on Build". In Visual Studio Dev-11 it is all much simpler: just go to the solution root in the Solution Explorer, right-click and choose "Run code analysis on solution".

    The ruleset checks

    The following list contains the essential and critical memory checks (CheckID | Message | Can be ignored? | Link to description with fix suggestions):

      - CA1001 | Types that own disposable fields should be disposable | No | http://msdn.microsoft.com/en-us/library/ms182172.aspx
      - CA1049 | Types that own native resources should be disposable | Only if the pointers assumed to point to unmanaged resources point to something else | http://msdn.microsoft.com/en-us/library/ms182173.aspx
      - CA1063 | Implement IDisposable correctly | No | http://msdn.microsoft.com/en-us/library/ms244737.aspx
      - CA2000 | Dispose objects before losing scope | No | http://msdn.microsoft.com/en-us/library/ms182289.aspx
      - CA2115 (1) | Call GC.KeepAlive when using native resources | See description | http://msdn.microsoft.com/en-us/library/ms182300.aspx
      - CA2213 | Disposable fields should be disposed | If you are not responsible for release, or if Dispose occurs at a deeper level | http://msdn.microsoft.com/en-us/library/ms182328.aspx
      - CA2215 | Dispose methods should call base class dispose | Only if the call to base happens at a deeper calling level | http://msdn.microsoft.com/en-us/library/ms182330.aspx
      - CA2216 | Disposable types should declare a finalizer | Only if the type does not implement IDisposable for the purpose of releasing unmanaged resources | http://msdn.microsoft.com/en-us/library/ms182329.aspx
      - CA2220 | Finalizers should call base class finalizers | No | http://msdn.microsoft.com/en-us/library/ms182341.aspx

    Note 1: does not result in a memory leak, but may cause the application to crash.

    The list below is a set of optional checks that may be enabled for your ruleset, because the issues they point to often arise as a result of attempting to fix the warnings from the first set (ID | Message | Type of fault | Can be ignored? | Link to description with fix suggestions):

      - CA1060 | Move P/Invokes to NativeMethods class | Security | No | http://msdn.microsoft.com/en-us/library/ms182161.aspx
      - CA1816 | Call GC.SuppressFinalize correctly | Performance | Sometimes, see description | http://msdn.microsoft.com/en-us/library/ms182269.aspx
      - CA1821 | Remove empty finalizers | Performance | No | http://msdn.microsoft.com/en-us/library/bb264476.aspx
      - CA2004 | Remove calls to GC.KeepAlive | Performance and maintainability | Only if not technically correct to convert to SafeHandle | http://msdn.microsoft.com/en-us/library/ms182293.aspx
      - CA2006 | Use SafeHandle to encapsulate native resources | Security | No | http://msdn.microsoft.com/en-us/library/ms182294.aspx
      - CA2202 | Do not dispose of objects multiple times | Exception (System.ObjectDisposedException) | No | http://msdn.microsoft.com/en-us/library/ms182334.aspx
      - CA2205 | Use managed equivalents of Win32 API | Maintainability and complexity | Only if the replacement doesn't provide the needed functionality | http://msdn.microsoft.com/en-us/library/ms182365.aspx
      - CA2221 | Finalizers should be protected | Incorrect implementation, only possible in MSIL coding | No | http://msdn.microsoft.com/en-us/library/ms182340.aspx

    Downloadable ruleset definitions

    I have defined three rulesets: one called Inmeta.Memorycheck with the rules in the first list above, one called Inmeta.Memorycheck.Optionals containing the rules in the second list, and a last one called Inmeta.Memorycheck.All containing the sum of the first two. All three rulesets can be found in the zip archive "Inmeta.Memorycheck" downloadable from here.
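    As a reminder of the pattern several of these checks (CA1063, CA2215, CA2220) steer you toward, here is a minimal sketch of the standard dispose pattern for a class that owns a managed resource and, potentially, unmanaged state; the class and field names are illustrative only:

        using System;
        using System.IO;

        public class ResourceHolder : IDisposable
        {
            private FileStream _stream = new FileStream("log.txt", FileMode.Append);
            private bool _disposed;

            public void Dispose()
            {
                Dispose(true);
                // CA1816: tell the GC no finalization is needed once disposed
                GC.SuppressFinalize(this);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (_disposed)
                    return;
                if (disposing)
                {
                    // release managed resources only on an explicit Dispose
                    if (_stream != null)
                        _stream.Dispose();
                }
                // release unmanaged resources (if any) here, on both paths
                _disposed = true;
            }
        }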
    Links to some other resources relevant to Static Code Analysis:

      - MSDN Magazine article by Mickey Gousset on Static Code Analysis in VS2010
      - MSDN: Analyzing Managed Code Quality by Using Code Analysis - the root of the documentation for this
      - Preventing generated code from being analyzed using attributes
      - Online training course on using Code Analysis with VS2010
      - Blog post by Tatham Oddie on custom code analysis rules
      - How to write custom rules, from the Microsoft Code Analysis Team Blog
      - Microsoft Code Analysis Team Blog

    Read the article

  • Advice on learning programming languages and math.

    - by Joris Ooms
    I feel like I'm getting stuck lately when it comes to learning about programming-related things; I thought I'd ask a question here and write it all down in the hope of getting some pointers/advice from people. Perhaps writing it down helps me put things in perspective for myself as well.

    I study Interactive Multimedia Design. This course is based on two things: graphic design on one hand, and web development on the other. I have fairly decent knowledge of web-related languages (the usual HTML/JS/PHP), and I'll be getting a course on ASP.NET next year. In my free time, I have learnt how to work with CodeIgniter, as well as doing some diving into Ruby (and Rails) and basic iOS programming. In my first year of college I also did a class on Java (19/20 end result). This grade doesn't really mean anything though; I have the basics of OOP down, but Java-wise we learnt next to nothing. Considering the time I have been programming in, for example, PHP, I can't say I'm bad at it. I'm definitely not good or great at it, but I'm decent. My teachers tell me I have the programming thing down. They just tell me I should keep on learning. So that's what I do, and I try to take in as much as possible; however, sometimes I'm unsure where to start, and I have this tendency to always doubt myself.

    Now, for the 'question'. I want to get into iOS programming. I know iOS programming boils down to programming in Cocoa Touch and Objective-C, and I know Objective-C is a superset of C. I did a class on C a couple of years ago, but I failed miserably. I got stuck at pointers and never really understood them... until like a month ago, when I suddenly 'got' it. I have been working through a book on Objective-C for a week or so now, and I understand the basics (I'm at like chapter 6 or so). However, I keep running into problems similar to the ones I had in the C class: I suck at math.

    No, really. I come from a Latin-Modern Languages background in high school and had nearly no math classes back then. I wanted to study Computer Science, but I failed there because of the miserable state of my mathematics knowledge. I can't explain why I'm suddenly talking about math here, because it isn't directly related to programming... yet it is. For example, the examples in the book I'm reading now are about programming a fraction calculator. All good - I can do the programming once I get the formulas down, but it takes me a full day or more to actually get to that point.

    I also find it hard to come up with ideas of my own. I made one small iOS app the other day and it's just a button/label kind of thing: when I press the button, it generates a random number. That's really all I could come up with. Can you 'learn' that? It probably comes down to creativity, but evidently I'm not too great at being creative. Are there any sites or resources out there that provide something like a basic list of things you can program when you're just starting out?

    Maybe I'm focusing on too many things at once. I want to keep my HTML/CSS at a decent level, while learning PHP and CodeIgniter, while diving into Ruby on Rails, and learning Objective-C and the iOS SDK at the same time. I just want to be good at something, I guess. The problem is that I can't seem to be happy with my PHP stuff. I want more, something 'harder'; that's why I decided to pick up the iOS thing. Like I said, I have the basics of a lot of different languages down. I can program something simple in Java, in C, and in Objective-C as of this week, but it ends there - mostly because I can't come up with ideas for more complex applications, and also because I just doubt myself: 'Oh, that's too complex, I can never do that.' And then it ends there.

    To conclude my rant, let me basically rephrase my questions into a 'tl;dr' part.

      A. I want to get into iOS programming and I have basic knowledge of C/Objective-C. However, I struggle to come up with ideas of my own and implement them, and I also suck at math, which is something that isn't directly related to programming, yet often needed while doing it. What can I do?
      B. I have an interest in a lot of different programming languages and I can't stop reading/learning. However, I don't feel like I'm good at anything. Should I perhaps focus on just one language for a year or longer, or keep taking it all in at the same time and hope I'll finally get them all down?
      C. Are there any resources out there that provide basic ideas of things I can program? I'm thinking about 'simple' command-line applications here, to help me while studying C/Objective-C away from the whole iPhone SDK. Like I said, the examples in my book are mainly math-based (fraction calculator) and it's kinda hard. :(

    Thanks a lot for reading my post. I didn't plan for it to be this long, but oh well. Thanks in advance for any answers.

    Read the article

  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things: Organizations are still determining what their dataflow and analysis workloads will comprise Small deployments under tests aren't likely to show the signs of strains that would send someone looking for resource allocation options The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation. However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves something about what the off-the-rack scheduling options are. We have some choices: The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis. The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis. Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill. If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can do user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require. To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf: <property> <name>mapred.jobtracker.taskScheduler</name> <value>org.apache.hadoop.mapred.FairScheduler</value> </property> <property> <name>mapred.fairscheduler.allocation.file</name> <value>/path/to/allocations.xml</value> </property> <property> <name>mapred.fairscheduler.poolnameproperty</name> <value>pool.name</value> </property> <property> <name>pool.name</name> <value>${user.name}</name> </property> What we've done here is simply tell the JobTracker that we'd like to task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this <?xml version="1.0"?> <allocations> <pool name="dan"> <minMaps>5</minMaps> <minReduces>5</minReduces> <maxMaps>25</maxMaps> <maxReduces>25</maxReduces> <minSharePreemptionTimeout>300</minSharePreemptionTimeout> </pool> <mapreduce.job.user.name="dan"> <maxRunningJobs>6</maxRunningJobs> </user> <userMaxJobsDefault>3</userMaxJobsDefault> <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout> </allocations> In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run hive or pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker. 
There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.
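One more note before moving on: since mapred.fairscheduler.poolnameproperty points at pool.name, a job can also opt into a specific pool by setting that property on its own configuration rather than relying on the ${user.name} default. Here's a minimal sketch of that using the old mapred API -- the pool name "adhoc" and the identity map/reduce job wrapped around it are hypothetical, just enough to show where the property goes:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class PoolAwareJob {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(PoolAwareJob.class);
        conf.setJobName("pool-aware-job");

        // Matches the mapred.fairscheduler.poolnameproperty declared in mapred-site.xml,
        // so the FairScheduler places this job in the "adhoc" pool (hypothetical name)
        conf.set("pool.name", "adhoc");

        // Defaults give an identity map/reduce over text input -- purely illustrative
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf); // blocks until the job completes
    }
}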

    Read the article

  • Center Pictures and Other Objects in Office 2007 & 2010

    - by Matthew Guay
Sometimes it can be difficult to center a picture in a document just by dragging it around. Today we show you how to center pictures, images, and other objects perfectly in Word and PowerPoint. Note: For this tutorial we're using Office 2010, but the steps are nearly identical in 2007. Centering a Picture in Word First let's insert a picture into our document. Click the Insert tab, and then click Picture. Once you select the picture you want, it will be added to your document. Usually, pictures are added wherever your cursor was in the document, so in a blank document it will be added at the top left. Also notice Picture Tools show up in the Ribbon after inserting an image. Note: The following menu items are available in the Picture Tools Format tab, which is displayed when you select the object or image you're working with. How do we align the picture just like we want? Click Position to get some quick placement options, including centered in the middle of the document or on the top. However, for more advanced placement, we can use the Align tool. If Word isn't maximized, you may only see the icon without the "Align" label. Notice the tools were grayed out in the menu by default. To be able to change the alignment, we need to first change the text wrap settings. Click the Wrap Text button, and choose any option other than "In Line with Text". Your choice will depend on the document you're writing; just choose the option that works best in the document. Now, select the Align tools again. You can now position your image precisely with these options. Align Center will position your picture in the center of the page width-wise. Align Middle will put the picture in the middle of the page height-wise. This works the same with textboxes. Simply click the Align button in the Format tab, and you can center it in the page. And if you'd like to align several objects together, simply select them all, click Group, and then select Group from the menu. Now, in the align tools, you can center the whole group on your page for a heading, or whatever you want to use the pictures for. These steps also work the same in Office 2007. Center objects in PowerPoint This works similarly in PowerPoint, except that pictures are automatically set for square wrapping, so you don't have to change anything. Simply insert the picture or other object of your choice, click Align, and choose the option you want. Additionally, if one object is already aligned like you want, drag another object near it and you will see a Smart Guide to help you align or center the second object with the first. This only works with shapes in the PowerPoint 2010 beta, but will work with pictures, textboxes, and media in the final release this summer. Conclusion These are good methods for centering images and objects in Word and PowerPoint. From designing perfect headers to emphasizing your message in a PowerPoint presentation, this is something we've found useful and hope you will too. Since we're talking about Office here, it's worth mentioning that Microsoft has announced the Technology Guarantee Program for Office 2010. Essentially what this means is, if you purchase a version of Office 2007 between March 5th and September 30th of this year, when Office 2010 is released you'll be able to upgrade to it for free! 

    Read the article

  • BIP and Mapviewer Mash Up I

    - by Tim Dexter
I was out in Yellowstone last week soaking up various wildlife and a bit too much rain ... good to be back until the 95F heat yesterday. Taking a little break from the Excel templates; the dev folks are planning an Excel patch in the next week or so that will add a mass of new functionality. At the risk of completely misleading you, I'm going to hang back a while. What I have written so far holds true and will continue to do so. This week, I have been mostly eating 'mapviewer' ... answers on a postcard please, TV show and character. I had a request to show how BIP can call mapviewer and render a dynamic map in an output. So I hit the books and colleagues for some answers. Mapviewer is Oracle's geographic information system, hereafter known as GIS. I use it a lot in our BIEE demos, where the interaction with the maps is very impressive. Need a map of California and its congressional districts? I have contacts; Jerry and David with their little black box of maps. Once in my possession I can build highly interactive, clickable maps that allow the user to drill into more information using a very friendly interface driving BIEE content and navigation. But what about maps in BIP output? Bryan Wise, who has written some articles on this blog, did some work a while back with the PL/SQL API interface. The extract for the report called a function that in turn called the mapviewer server, passing a set of mapping requirements; it then returned a URL to a cached copy of that map. It was then easy to have BIP render that image. That's still very doable. You need to install a couple of packages and then load the mapviewer java APIs into the database. Then you can write your function against the APIs. A little involved? Maybe, but the database is doing all the heavy lifting for you. I thought I would investigate another method for getting the maps back into BIP. There is a URL interface you can call; this involves building an XML message to be passed to the mapviewer server. It's pretty straightforward to use on the mapviewer side. On the BIP side things are a little more tricksy. After some unexpected messing about I finally got the ubiquitous Hello World map to render using the URL method. Not the most exciting map in the world: lots of ocean and a rather long URL to get it to render. http://127.0.0.1:9704/mapviewer/omserver?xml_request=%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E Notice all of the encoding in the URL string to handle the spaces, quotes, etc. All necessary to get BIP to make the call to the mapviewer server correctly without truncating the URL if it hits a real space rather than a %20. With that in mind, constructing the URL was pretty simple. I'm not going to get into the content of the URL too much; for that you need to bone up on the mapviewer XML API. Check out the home page here and the documentation here. To make the template portable I used the standard CURRENT_SERVER_URL parameter from the BIP server and declared that in my template. <?param@begin:CURRENT_SERVER_URL;'myserver'?> Ignore the 'myserver'; that was just a dummy value for testing. At runtime it will resolve to: 'http://yourserver:port/xmlpserver' Not quite what we need, as mapviewer has its own server path; in my case I needed 'mapviewer/omserver?xml_request=' as the fixed path to the mapviewer request URL. 
A little concatenation and substringing later, I came up with <?param@begin:mURL;concat(substring($CURRENT_SERVER_URL,1,22),'mapviewer/omserver?xml_request=')?> That's the basic URL that I can then build on. To get the Hello World map I need to add the following: <map_request title="Hello World" datasource="cagis" format="GIF_STREAM"/> Those angle brackets were the source of my headache: BIP's XSLT engine was attempting to process them rather than just pass them through. Hok Min to the rescue ... again. I owe him lunch when I get out to HQ again! To solve the problem, I needed to escape all the characters and white space and then use native XSL to assign the string to a parameter. <xsl:param xdofo:ctx="begin" name="pXML">%3Cmap_request%20title=%22Hello%20World%22%20datasource=%22cagis%22%20format=%22GIF_STREAM%22/%3E</xsl:param> I did not need to assign it to a parameter, but I felt that if I were going to do anything more serious than Hello World, like plotting points of interest on the map, I would need to dynamically build the URL, so using a set of parameters or variables that I then concatenated would be easier. Now that I had the initial server string and the request, all I then did was combine the two using a concat: concat($mURL,$pXML) Embedding that into an image tag: <fo:external-graphic src="url({concat($mURL,$pXML)})"/> and I was done. Notice the curly braces to get the concat evaluated prior to the image call. As you will see next time, building the XML message to go onto the URL can get quite complex, but I have used it with some data. Ultimately, it would be easier to build an extension to BIP to handle the data to be plotted; it would then build the XML message, call mapviewer and return a URL to the map image for BIP to render. More on that next time ...
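In the meantime, to give a flavour of that dynamic assembly, here is a minimal sketch of building the escaped message from smaller pieces rather than one long literal. The pTitle and pDatasource parameters are hypothetical ones of my own, encoded with the same %XX scheme as above, and you would want to confirm that your BIP release is happy with the select form on an xdofo:ctx parameter:

<xsl:param xdofo:ctx="begin" name="pTitle" select="'Hello%20World'"/>
<xsl:param xdofo:ctx="begin" name="pDatasource" select="'cagis'"/>
<!-- stitch the fragments into the escaped map_request message -->
<xsl:param xdofo:ctx="begin" name="pXML"
  select="concat('%3Cmap_request%20title=%22', $pTitle,
                 '%22%20datasource=%22', $pDatasource,
                 '%22%20format=%22GIF_STREAM%22/%3E')"/>
<!-- and the same image call as before -->
<fo:external-graphic src="url({concat($mURL,$pXML)})"/>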

    Read the article

  • IndyTechFest Recap

    - by Johnm
The sun had yet to rise above the horizon on Saturday, May 22nd and I was traveling toward the location of the 2010 IndyTechFest. In my freshly awakened, pre-coffee state I reflected on the months that preceded this day and how quickly they slipped away. The big day had finally come and the morning dew glistened with a unique brightness that morning. What is this all about? For those who are unfamiliar with IndyTechFest, it is a regional conference held in Indianapolis and hosted by the Indianapolis .NET Developers Association (IndyNDA) and the Indianapolis Professional Association for SQL Server (IndyPASS). The event presents multiple tracks and sessions covering subjects such as Business Intelligence, Database Administration, .NET Development, SharePoint Development, Windows Mobile Development as well as non-Microsoft topics such as Lean and MongoDB. This year's event was the third hosting of IndyTechFest. No man is an island No event such as IndyTechFest is executed by a single person. My fellow co-founders, with their highly complementary skill sets and philanthropy, make the process very enjoyable. Our amazing volunteers and their aid were indispensable. The generous financial support of our sponsors made the event and fabulous prizes possible. The spectacular line-up of speakers came from near and far to donate their time and knowledge. Our beloved attendees sacrificed the first sunny Saturday in weeks to expand their skill sets and network with their peers. We are deeply appreciative. Challenges in preparation With the preparation of any event comes challenges. It is these challenges that make the process of planning an event so interesting. This year's largest challenge was the location of the event. In the past two years IndyTechFest was held at the Gene B. Glick Junior Achievement Center in Indianapolis. This facility has been the hub of the Indy technical community for many years. As the big day drew near, the facility's availability came into question due to some recent changes that had occurred with those who operated the facility. We began our search for an alternative option. Thankfully, the Marriott Indianapolis East was available, was very spacious and was willing to work within the range of our budget. Within days of our event, the decision to move proved to be wise since the prior location had begun renovations to the interior. Whew! Always trust your gut. Every day it's getting better At the end of each year, we huddle together, review the evaluations and identify an area in which the event could improve. This year's big opportunity for improvement resided in the prize give-away portion at the end of the day. In the 2008 event, admittedly, this portion was rather chaotic, rushed and disorganized. This year, we broke the drawing into two sections, of which each attendee received two tickets. The first ticket was a drawing for the mountain of books that were given away. The second ticket was a drawing for the big prizes, the 2 Xboxes, 3 laptops and iPad. We peppered the ticket drawings with gift card raffles and tossing t-shirts into the audience. If at first you don't succeed, try and try again Each year of IndyTechFest, we have offered a means for ad-hoc sessions or discussion groups to pop up. To our disappointment it was something that never quite took off. We have always believed that this unique type of session was valuable and wanted to figure out a way to make it work for this year. 
A special thanks to Alan Stevens, who took on and facilitated the "open space" track and made it an official success. Share with your tweety When the attendee badges were designed we decided to place an emphasis on the attendee's Twitter account as well as the event's hash-tag (#IndyTechFest) to encourage some real-time buzz during the day. At the host table we displayed a Twitter feed for all to enjoy. It was quite successful and an encouraging use of social media. My badge was missing my Twitter account since it was recently changed. For those who care to follow my rather sparse tweets, my address is @johnnydata. Man, this is one long blog post! All in all it was a very successful event. It is always great to see new faces and meet old friends. The planning for the 2011 IndyTechFest will kick off very soon. We have more capacity for future growth and a truck full of great ideas. Stay tuned!

    Read the article

  • Extending Blend for Visual Studio 2013

    - by Chris Skardon
Originally posted on: http://geekswithblogs.net/cskardon/archive/2013/11/01/extending-blend-for-visual-studio-2013.aspx So, I got a comment yesterday on my post about Extending Blend 4 and Blend for Visual Studio 2012 asking if I knew how to get it working for Blend for Visual Studio 2013. My initial thought was to just change the location to get the Blend dlls from Visual Studio 11.0 to 12.0 and you're all set, so I went to do that, only to discover that the dlls I normally reference, well – they don't exist. So… I had made a presumption that the actual process of using MEF etc. is still the same. I was wrong. So, the route to discovery required DotPeek and opening a few of Blend's dlls… Browsing through the Blend install directory (./Microsoft Visual Studio 12.0/Blend/) I notice the .addin files: So I decide to peek into the SketchFlow dll, then promptly remember SketchFlow is quite a big thing, and hunting through there is not ideal; luckily there is another dll using an .addin file, 'Microsoft.Expression.Importers.Host', so we'll go for that instead. We can see it's still using the 'IPackage' formula, but where is that sucker? Well, we just press F12 on the 'IPackage' bit and DotPeek takes us there, with a very handy comment at the top:
// Type: Microsoft.Expression.Framework.IPackage
// Assembly: Microsoft.Expression.Framework, Version=12.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a
// MVID: E092EA54-4941-463C-BD74-283FD36478E2
// Assembly location: C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Microsoft.Expression.Framework.dll
Now we know where the IPackage interface is defined, so let's just try writing a control. Last time I did a separate dll for the control; this time I'm not, but it still works if you want to do it that way. Let's build a control! STEP 1 Create a new WPF application. Naming doesn't matter any more! I have gone with 'Hello2013' (see what I did there?) STEP 2 Delete: App.Config, App.xaml, MainWindow.xaml. We won't be needing them. STEP 3 Change your application to be a Class Library instead. (You might also want to delete the 'vshost' stuff in your output directory now, as they only exist for hosting the WPF app, and just cause clutter.) STEP 4 Add a reference to the 'Microsoft.Expression.Framework.dll' (which you can find in 'C:\Program Files\Microsoft Visual Studio 12.0\Blend' – that's Program Files (x86) if you're on an x64 machine!). 
STEP 5 Add a User Control. I'm going with 'Hello2013Control', and following from last time, it's just a TextBlock in a Grid:
<UserControl x:Class="Hello2013.Hello2013Control"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             mc:Ignorable="d"
             d:DesignHeight="300" d:DesignWidth="300">
  <Grid>
    <TextBlock>Hello Blend for VS 2013</TextBlock>
  </Grid>
</UserControl>
STEP 6 Add a class to load the package – I've called it – yes, you guessed it – Hello2013Package, which will look like this:
namespace Hello2013
{
    using Microsoft.Expression.Framework;
    using Microsoft.Expression.Framework.UserInterface;

    public class Hello2013Package : IPackage
    {
        private Hello2013Control _hello2013Control;
        private IWindowService _windowService;

        public void Load(IServices services)
        {
            _windowService = services.GetService<IWindowService>();
            Initialize();
        }

        private void Initialize()
        {
            _hello2013Control = new Hello2013Control();
            if (_windowService.PaletteRegistry["HelloPanel"] == null)
                _windowService.RegisterPalette("HelloPanel", _hello2013Control, "Hello Window");
        }

        public void Unload() { }
    }
}
You might note that compared to the 2012 version we're no longer [Exporting(typeof(IPackage))]. The file you create in STEP 7 covers this for us. STEP 7 Add a new file called: '<PROJECT_OUTPUT_NAME>.addin' – in reality you can call it anything and it'll still read it in just fine; it's just nicer if it all matches up, so I have 'Hello2013.addin'. Content-wise, we need to have:
<?xml version="1.0" encoding="utf-8"?>
<AddIn AssemblyFile="Hello2013.dll" />
obviously, replacing 'Hello2013.dll' with whatever your dll is called. STEP 8 We set the 'addin' file to be copied to the output directory: STEP 9 Build! STEP 10 Go to your output directory (./bin/debug) and copy the 3 files (Hello2013.dll, Hello2013.pdb, Hello2013.addin), then paste them into the 'Addins' folder in your Blend directory (C:\Program Files\Microsoft Visual Studio 12.0\Blend\Addins) STEP 11 Start Blend for Visual Studio 2013 STEP 12 Go to the 'Window' menu and select 'Hello Window' STEP 13 Marvel at your new control! Feel free to email me / comment with any problems!
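One small convenience on top of STEP 10: rather than copying the three files by hand after every build, a post-build event can deploy them for you. This is a sketch only, assuming the default install path on an x64 machine and that Visual Studio runs elevated so it can write under Program Files:

rem Post-build event command line (Project Properties > Build Events)
xcopy "$(TargetDir)$(TargetName).dll"   "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Addins\" /Y
xcopy "$(TargetDir)$(TargetName).pdb"   "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Addins\" /Y
xcopy "$(TargetDir)$(TargetName).addin" "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Blend\Addins\" /Y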

    Read the article

  • ASP.NET: Using pickup directory for outgoing e-mails

    - by DigiMortal
Sending e-mails out from web applications is a very common task. When we are working on or testing our systems with real e-mail addresses, we don't want recipients to receive e-mails (especially if we are using some subset of real data). In this posting I will show you how to make the ASP.NET SMTP client write e-mails to disk instead of sending them out. SMTP settings for web application I have often seen code where all SMTP information is kept in app settings just so it can be read in code and handed to the SMTP client. This is not necessary, because we can define all these settings under the system.net => mailSettings node. If you are using web.config to keep SMTP settings then all you have to do in your code is create SmtpClient with the empty constructor. var smtpClient = new SmtpClient(); The empty constructor means that all settings are read from the web.config file. What is a pickup directory? If you want to drastically raise the e-mail throughput of your SMTP server then it is not a very wise plan to communicate with it using the SMTP protocol. It only adds overhead to your network and SMTP server: clients make connections and send messages out, and that is overhead we can avoid. If clients write their e-mails to some folder that the SMTP server can access, then the SMTP server has e-mail forwarding as its only resource-eager task. File operations are way faster than communication over the SMTP protocol. The directory where clients write their e-mails as files is called the pickup directory. For example, Exchange Server has support for pickup directories. And as there are applications with a lot of users who want e-mail notifications, the .NET SMTP client supports writing e-mails to a pickup directory instead of sending them out. How to configure ASP.NET SMTP to use a pickup directory? Let's say, it is more than easy. It is very easy. This is all you need.
<system.net>
  <mailSettings>
    <smtp deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\temp\maildrop\"/>
    </smtp>
  </mailSettings>
</system.net>
Now make sure you don't miss some points: The pickup directory must physically exist because it is not created automatically. IIS (or Cassini) must have write permissions to the pickup directory. Go through your code and look for hardcoded SMTP settings. Also take a look at all the places in your code where you send out e-mails, to make sure there are no custom SMTP settings used! And don't forget that your mails will now be written to the pickup directory and are not sent out to recipients anymore. Advanced scenario: configuring the SMTP client in code In some advanced scenarios you may need to support multiple SMTP servers. If configuration is dynamic or is not kept in web.config, you need to initialize your SmtpClient in code. This is all you need to do.
var smtpClient = new SmtpClient();
smtpClient.DeliveryMethod = SmtpDeliveryMethod.SpecifiedPickupDirectory;
smtpClient.PickupDirectoryLocation = pickupFolder;
Easy, isn't it? I like it when advanced scenarios end up with simple and elegant solutions rather than rocket science. Note for IIS SMTP service The SMTP service of IIS is also able to use a pickup directory. If you have set up IIS with the SMTP service, you can configure your ASP.NET application to use the IIS pickup folder. In this case you have to use the following setting for the delivery method. SmtpDeliveryMethod.PickupDirectoryFromIis You can also set this in the web.config file. 
<system.net>
  <mailSettings>
    <smtp deliveryMethod="PickupDirectoryFromIis" />
  </mailSettings>
</system.net>
Conclusion Anyone who was still using other methods to avoid sending e-mails out in development or testing environments can now remove all that bad code from their application and rely on the mail settings of ASP.NET. It is easy to configure and you have less code to support e-mails when you use the built-in e-mail features wisely.
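To sanity-check the configuration, a quick snippet like the one below (the addresses are placeholders) should leave an .eml file in the pickup folder instead of contacting any SMTP server:

using System.Net.Mail;

public class PickupDirectoryDemo
{
    public static void Main()
    {
        // Reads system.net/mailSettings from the config file; with
        // SpecifiedPickupDirectory the message is written out as an .eml file
        var smtpClient = new SmtpClient();
        var message = new MailMessage(
            "noreply@example.com",       // from (placeholder)
            "recipient@example.com",     // to (placeholder)
            "Pickup directory test",     // subject
            "This mail should end up in c:\\temp\\maildrop\\ as an .eml file.");
        smtpClient.Send(message);
    }
}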

    Read the article

  • Easiest, most fun way to program 2D games? Flash? XNA? Some other engine?

    - by Maxi
Hi, this is a post detailing my search for the most enjoyable way for a hobbyist game programmer to sweeten his free time with making a game. My requirements: I looked at Flash first. I made a couple of small games, but I'm doubtful of the performance. I would like to make a fairly large strategy game, with several hundred units fighting simultaneously, explosions and animations included. Also zoomable maps. I saw that Adobe has a new 3D API for Flash, but I don't know if that improves 2D performance as well; I couldn't find anything related to that question in their MAX10 sessions. Would you say that Flash is a good technology for making large 2D games easily? I really like ActionScript, and I love how easy everything is in Flash. There are several engines available which make it even easier. I just do this for fun, and it would be even better if there were proper animation/particle editors available and if the engine I were to use were available for multiple platforms (so more people can play my game once finished). I'd like to have it available on many mobile platforms as well (because I love touch input for some reason). I do know the XNA framework pretty well, but there are no good engines available for it, and it will only run on Windows, which is a huge turn-off. An even bigger one is that you need to install the XNA redistributable each time you want to give the game to someone. If I use XNA, I would have to make all the tools myself, and I'd probably have to make them with WPF. (I'd love to make tools with Adobe AIR, but unfortunately the APIs for image manipulation etc. are far worse in Flash than they are in XNA/WPF.) Now, I'm aware that I could make my own engine that supports each of those platforms, but quite frankly, that would be too much work plowing through APIs. After all, I want to make a game, not an engine. So the question becomes: Is there maybe a cross-platform (free or free to develop?) engine available that I could use for 2D development? I prefer: C#, ActionScript. I don't mind using C++ if the toolset is above average, but I highly doubt that there is something out there like that. Please prove me wrong :) So summary: I'd like to use Flash, but I don't know if it scales well enough. I'm not a scripter; I want some real APIs that I can work with inside a proper IDE. Just for information, I looked at several alternatives; I've actually been looking for a long time already. You'd help me a lot in finally making a decision. Feature-wise the FlatRedBall engine would be ideal. But I tried their tools, and quite frankly, they are horrible. Absolutely unusable; I'd need to make my own for sure. I didn't look at their API, but if their tools are so bad, I'm not inclined to look further. Unity3D. This one is quite nice, but I really don't need 3D, and it is quite... a lot of work to learn. I also don't like that it is so expensive to use for different platforms and that I can only code for it through scripting. You have to buy each platform separately. The editor usability is average; the product overall is good enough for most purposes, but learning it myself would be overkill. Shiva 3D. It looks good enough, but again: I don't really need 3D. The editor usability is a little worse than Unity3D's in my opinion, and it wasn't clear to me how to start programming. I think it requires C++ for coding, so that's a negative too. I want to have fun, and C# is fun ;) SDL. Quite frankly, I'd still need to port to all those different SDL implementations. 
And I don't like OpenGL-style programming; it's just plain ugly. And it needs C++. I know that there might be some wrappers available, but I don't like to use wrappers, because... Irrlicht. A lot of features, but support seems to be low and it is aimed at enthusiasts. C# bindings get dropped repeatedly. I'm not an engine enthusiast, I just want to make a game. I don't see this happening with Irrlicht. Ogre3D. Way too much work; it's just a graphics engine. Also no multiple platform support, and C++. Torque2D. Costs something to use, and I didn't hear a lot of good things about support and documentation. Also costs extra for each platform.

    Read the article

  • boot issues - long delay, then "gave up waiting for root device"

    - by chazomaticus
I've had this issue on and off for about two years now. I noticed it on a new (custom-built) machine running 10.04 when that first came out, but then it went away until a few months ago. I've gone through a number of hard drive changes, but I can't say specifically what, if anything, I changed hardware-wise to make it stop or start happening. I had assumed upgrading to a modern Ubuntu version would fix the issue, so I installed the 12.04 beta on a spare partition last night, but it's still happening. Here's the issue. After grub loads and I select a kernel to boot, the screen goes blank save for a blinking cursor. It sits in this state for many long minutes before it finally gives up and gives me an initramfs shell with the message gave up waiting for root device (and lists the /dev/disk/by-uuid/... path it was waiting for) but no other specific diagnostic information. Now, here's the tricky part. For one, the problem is intermittent - sometimes it progresses from the blinking cursor to the Ubuntu splash boot screen in a few seconds, and once it gets that far it always continues booting fine. The really bizarre thing is that I can "force" it to "find" the root device by repeatedly pressing the space bar and hitting the machine's power button. If I tap those enough, eventually I will notice the hard drive light coming on, at which point it will always continue the boot process after a few seconds. Interestingly, if I wait slightly too long before pressing the power button (30s?), as soon as I press it I get the gave up waiting message and the initramfs shell. I've tried setting up /etc/fstab (and the grub menu.lst or whatever it's called nowadays) to use device names (e.g. /dev/sda1) instead of UUIDs, but I get the same effect, just with the device name, not the UUID, in the error message. I should also mention that when I boot to Windows 7, there is no issue. It boots slowly all the time just by virtue of being Windows, but it never hangs indefinitely. This would seem to indicate it's a problem in Ubuntu, not the hardware. It's pretty annoying to have to babysit the computer every time it boots. Any ideas? I'm at a loss. Not even sure how to diagnose the issue. Thanks! EDIT: Here's some dmesg output from 10.04. The 15 second gap is where it was doing nothing. I pressed the power button and space bar a few times, and the stuff at 16 seconds happened. Not sure what any of it means. 
[ 1.320250] scsi18 : ahci
[ 1.320294] scsi19 : ahci
[ 1.320320] ata19: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe100 irq 18
[ 1.320323] ata20: SATA max UDMA/133 abar m8192@0xfd4fe000 port 0xfd4fe180 irq 18
[ 1.403886] usb 2-4: new high speed USB device using ehci_hcd and address 4
[ 1.562558] usb 2-4: configuration #1 chosen from 1 choice
[ 16.477824] ata16: SATA link down (SStatus 0 SControl 300)
[ 16.477843] ata19: SATA link down (SStatus 0 SControl 300)
[ 16.477857] ata3: SATA link down (SStatus 0 SControl 300)
[ 16.477895] ata15: SATA link down (SStatus 0 SControl 300)
[ 16.477906] ata20: SATA link down (SStatus 0 SControl 300)
[ 16.477977] ata17: SATA link down (SStatus 0 SControl 300)
[ 16.478003] ata12: SATA link down (SStatus 0 SControl 300)
[ 16.478046] ata13: SATA link down (SStatus 0 SControl 300)
[ 16.478063] ata14: SATA link down (SStatus 0 SControl 300)
[ 16.478108] ata11: SATA link down (SStatus 0 SControl 300)
[ 16.478123] ata18: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 16.478127] ata6: SATA link down (SStatus 0 SControl 300)
[ 16.478157] ata5: SATA link down (SStatus 0 SControl 300)
[ 16.478193] ata18.00: ATAPI: MARVELL VIRTUALL, 1.09, max UDMA/66
After that, it took its sweet time, and I had to keep hitting the space bar to coax it along. Here's some more dmesg output from a little later in the boot process:
[ 17.982291] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.0/input/input4
[ 17.982335] generic-usb 0003:046E:5506.0002: input,hidraw1: USB HID v1.10 Keyboard [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input0
[ 18.005211] input: BTC USB Multimedia Keyboard as /devices/pci0000:00/0000:00:13.0/usb5/5-2/5-2:1.1/input/input5
[ 18.005274] generic-usb 0003:046E:5506.0003: input,hiddev96,hidraw2: USB HID v1.10 Device [BTC USB Multimedia Keyboard] on usb-0000:00:13.0-2/input1
[ 22.484906] EXT4-fs (sda6): INFO: recovery required on readonly filesystem
[ 22.484910] EXT4-fs (sda6): write access will be enabled during recovery
[ 22.548542] EXT4-fs (sda6): recovery complete
[ 22.549074] EXT4-fs (sda6): mounted filesystem with ordered data mode
[ 32.516772] Adding 20482832k swap on /dev/sda5. Priority:-1 extents:1 across:20482832k
[ 32.742540] udev: starting version 151
[ 33.002004] Bluetooth: Atheros AR30xx firmware driver ver 1.0
[ 33.008135] parport_pc 00:09: reported by Plug and Play ACPI
[ 33.008186] parport0: PC-style at 0x378, irq 7 [PCSPP,TRISTATE]
[ 33.012076] lp: driver loaded but no devices found
[ 33.037271] ppdev: user-space parallel port driver
[ 33.090256] lp0: using parport0 (interrupt-driven).
Any clues in there?
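One thing I'm considering trying next is simply giving the kernel more patience before it gives up on the root device, via the standard rootdelay/rootwait boot parameters. A sketch of what I'd put in /etc/default/grub on 12.04 -- no idea yet whether it actually helps in my case:

# /etc/default/grub -- give the root device up to 60s to appear
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=60"
# or wait indefinitely for the root device:
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootwait"

# then regenerate the grub configuration:
sudo update-grub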

    Read the article

  • Extending Expression Blend 4 &amp; Blend for Visual Studio 2012

    - by Chris Skardon
Just getting this off the bat: I presume this will also work for Blend 5, but I can't confirm it… Anyhews, I imagine you're here because you want to know how to create an addin for Blend, so let's jump right in there! First and foremost, we're going to need to ensure our development environment has the right setup, so the checklist:
- Visual Studio 2012
- Blend for Visual Studio 2012
OK, let's create a new project (class library, .NET 4.5): Hello.Extension The '.Extension' bit is very, very important. The addin will not work unless it is named in this way. You can put whatever you want at the front, but it has to have the extension bit. OK, so now we have a solution with one project. To this project we need to add references to the following things:
- Microsoft.Expression.Extensibility (from c:\program files\Microsoft Visual Studio 11.0\Blend\ -- the x86 folder if you are on an x64 windows install)
- Microsoft.Expression.Framework (same location as above)
- PresentationCore
- PresentationFramework
- WindowsBase
- System.ComponentModel.Composition
Got them? ACE. Let's now add a project to contain our control, so, create a new WPF Application project, cunningly named something like 'Hello.Control'… (I'm creating a WPF application here, because I'm too lazy to dig up the correct references, and this will add all the ones I need.) Once that is created, delete the App.xaml and MainWindow.xaml files; we won't be needing them. You will also need to change the properties of the project itself, so it is only a class library. Once that is done, let's add a new UserControl, which will be this:
<UserControl x:Class="Hello.Control.HelloControl"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
             xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
             mc:Ignorable="d"
             d:DesignHeight="300" d:DesignWidth="300">
  <Grid>
    <TextBlock Text="HELLO!!!"/>
  </Grid>
</UserControl>
Impressive, eh? Now, let's reference the WPF project from the Extension library. All that's left now is to code up our extension… So, add a class to the Extension project (the name doesn't matter), and make it implement the IPackage interface from the Microsoft.Expression.Extensibility library:
public class HelloExtension : IPackage { /**/ }
We'll implement the two methods we need to:
public class HelloExtension : IPackage
{
    public void Load(IServices services) { }
    public void Unload() { }
}
We're only really concerned about the Load method in this case, as, let's face it, the extension we have doesn't need to do a lot to bog off. The interesting thing about the Load method is that it receives an IServices instance. This allows us to get access to all the services that Expression provides; in this case we're interested in one in particular: the IWindowService. So, let's get that bad boy…
private IWindowService _windowService;

public void Load(IServices services)
{
    _windowService = services.GetService<IWindowService>();
}
Nailed it… But why? The WindowService allows us to register our UserControl with Blend, which in turn allows people to activate and see it, which is a big plus point. 
So, let's do that… We'll create an 'Initialize' method to create our new control and add it to the WindowService:
private HelloControl _helloControl;

public void Initialize()
{
    _helloControl = new HelloControl();
    if (_windowService.PaletteRegistry["HelloPanel"] == null)
        _windowService.RegisterPalette("HelloPanel", _helloControl, "Hello Window");
}
First we check that we're not already registered, and if we're not, we register. The first argument is the identifier used by the service to, well, identify your extension. The second argument is the actual control, and the third argument is the name that people will see in the 'Window' menu of Blend itself (so important note here – don't put anything embarrassing or (need I say it?) sweary…). There are only two things to do now: call 'Initialize()' from our Load method, and export the class. This is easy money – add [Export(typeof(IPackage))] to the top of our class… The full code will (should) look like this:
[Export(typeof (IPackage))]
public class HelloExtension : IPackage
{
    private HelloControl _helloControl;
    private IWindowService _windowService;

    public void Load(IServices services)
    {
        _windowService = services.GetService<IWindowService>();
        Initialize();
    }

    public void Unload() { }

    public void Initialize()
    {
        _helloControl = new HelloControl();
        if (_windowService.PaletteRegistry["HelloPanel"] == null)
            _windowService.RegisterPalette("HelloPanel", _helloControl, "Hello Window");
    }
}
If you build this and copy it to your 'Extensions' folder in Blend (c:\program files\microsoft visual studio 11.0\blend\) and start Blend, you should see 'Hello Window' listed in the Window menu: That, as they say, is it!

    Read the article

  • Delaying NIS & NFS startup till after network interface is fully ready on Fedora 17

    - by obmarg
I've recently set up a Fedora 17 server for our network, and I've been having trouble getting the NIS service to work on startup. Here are some logs from the system:
Aug 21 12:57:12 cairnwell ypbind-pre-setdomain[718]: Setting NIS domain: 'indigo-nis' (environment variable)
Aug 21 12:57:13 cairnwell ypbind: Binding NIS service
Aug 21 12:57:13 cairnwell rpc.statd[730]: Unable to prune capability 0 from bounding set: Operation not permitted
Aug 21 12:57:13 cairnwell systemd[1]: nfs-lock.service: control process exited, code=exited status=1
Aug 21 12:57:13 cairnwell systemd[1]: Unit nfs-lock.service entered failed state.
Aug 21 12:57:14 cairnwell setroubleshoot: SELinux is preventing /usr/sbin/rpc.statd from using the setpcap capability. For complete SELinux messages. run sealert -l 024bba8a-b0ef-43dc-b195-5c9a2d4c4d41
Aug 21 12:57:15 cairnwell kernel: [ 18.822282] bnx2 0000:02:00.0: em1: NIC Copper Link is Up, 1000 Mbps full duplex
Aug 21 12:57:15 cairnwell kernel: [ 18.822925] ADDRCONF(NETDEV_CHANGE): em1: link becomes ready
Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): carrier now ON (device state 20)
Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: unavailable -> disconnected (reason 'carrier-changed') [20 30 40]
Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Auto-activating connection 'System em1'.
Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> Activation (em1) starting connection 'System em1'
Aug 21 12:57:15 cairnwell NetworkManager[621]: <info> (em1): device state change: disconnected -> prepare (reason 'none') [30 40 0]
.......
Aug 21 12:57:19 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
Aug 21 12:57:26 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
Aug 21 12:57:31 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
Aug 21 12:57:35 cairnwell sendmail[790]: YPBINDPROC_DOMAIN: Domain not bound
Aug 21 12:58:00 cairnwell ypbind: Binding took 47 seconds
Aug 21 12:58:00 cairnwell ypbind: NIS server for domain indigo-nis is not responding.
Aug 21 12:58:01 cairnwell ypbind: Killing ypbind with PID 733.
Aug 21 12:58:01 cairnwell ypbind-post-waitbind[734]: /usr/lib/ypbind/ypbind-post-waitbind: line 51: kill: SIGTERM: invalid signal specification
Aug 21 12:58:01 cairnwell systemd[1]: ypbind.service: control process exited, code=exited status=1
Aug 21 12:58:01 cairnwell systemd[1]: Unit ypbind.service entered failed state.
By the looks of these logs, the ypbind service is starting up at 12:57:12 but the network interface isn't coming up till 12:57:15. My guess is that this is causing ypbind to time out when trying to connect. As a knock-on effect, the NIS failure is causing problems with NFS, which is no longer able to map UIDs properly. This problem doesn't seem to be fixed by actually starting ypbind etc., so I've had to set all my NFS shares to noauto. I have tried manually adding NETWORKDELAY and NETWORKWAIT in /etc/sysconfig/network and also running systemctl enable NetworkManager-wait-online.service as I've seen suggested in some places, but neither of these has had any effect. It is relatively easy to fix manually by restarting ypbind & mounting NFS shares after the network has started up, but it's less than ideal to have to do this every time the server reboots. Does anyone know of an easy (and preferably hack-free) way of delaying the ypbind startup till after the network interface is fully ready?
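The next thing on my list to try is making the ordering explicit at the systemd level, so that ypbind only starts once the network is actually up. Something like the override sketched below -- though I'd need to check whether the drop-in directory mechanism is supported by the systemd that ships with Fedora 17 (if not, copying the full unit file to /etc/systemd/system and adding the same lines should give the same ordering):

# /etc/systemd/system/ypbind.service.d/wait-for-network.conf (path is illustrative)
[Unit]
Wants=NetworkManager-wait-online.service
After=network.target NetworkManager-wait-online.service

# reload unit files afterwards:
#   systemctl daemon-reload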

    Read the article

< Previous Page | 70 71 72 73 74 75 76 77 78 79 80  | Next Page >