Search Results

Search found 2575 results on 103 pages for 'deep zoom'.


  • Why would I learn C++11, having known C and C++?

    - by Shahbaz
    I am a programmer in C and C++, although I don't stick to either language and write a mixture of the two. Sometimes organizing code in classes, possibly with operator overloading, or templates and the oh-so-great STL is obviously the better way. Sometimes the use of a simple C function pointer is much more readable and clear. So I find beauty and practicality in both languages. I don't want to get into the discussion of "If you mix them and compile with a C++ compiler, it's not a mix anymore, it's all C++" - I think we all understand what I mean by mixing them. Also, I don't want to talk about C vs C++; this question is all about C++11. C++11 introduces what I think are significant changes to how C++ works, but it has also introduced many special cases that change how different features behave in different circumstances, placing restrictions on multiple inheritance, adding lambda functions, etc. I know that at some point in the future, when you say C++, everyone will assume C++11, much like when you say C nowadays you most probably mean C99. That makes me consider learning C++11. After all, if I want to continue writing code in C++, I may at some point need to start using those features simply because my colleagues have. Take C for example. After so many years, there are still many people learning and writing code in C. Why? Because the language is good. By good I mean that it follows many of the rules for creating a good programming language. So besides being powerful (which, easy or hard, almost all programming languages are), C is regular and has few exceptions, if any. C++11, however, I'm not so sure about. I'm not sure that the changes introduced in C++11 are making the language better. So the question is: why would I learn C++11? Update: My original question in short was: "I like C++, but the new C++11 doesn't look good because of this and this and this. However, deep down something tells me I need to learn it. So, I asked this question here so that someone would help convince me to learn it." However, the zealous people here can't tolerate someone pointing out a flaw in their language and were not at all constructive in this manner. After the moderator edited the question, it became more like a "So, how about this new C++11?", which was not at all my question. Therefore, in a day or two I am going to delete this question if no one comes up with an actual convincing argument. P.S. If you are interested in knowing what flaws I was talking about, you can edit my question and see the previous edits.
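
    As a concrete illustration of the kind of change at stake (an illustrative sketch, not taken from the original question), here is the same sort written with a C-style function pointer and with a C++11 lambda:

        #include <algorithm>
        #include <cstdlib>

        // C style: a named comparison function, passed by pointer
        int compare_ints(const void* a, const void* b) {
            return *(const int*)a - *(const int*)b;
        }

        int main() {
            int v[] = {3, 1, 2};
            qsort(v, 3, sizeof(int), compare_ints);                    // C: function pointer
            std::sort(v, v + 3, [](int a, int b) { return a < b; });   // C++11: inline lambda
        }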

    Read the article

  • Profiling Startup Of VS2012 &ndash; Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deeply I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 at 2.4 GHz with 3 GB RAM and 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods. In the configuration menu I asked for all call stacks, to get full details. Once this is configured you are ready to go. After that you can select the Method Grid to view wall-clock time in ms. I hate percentages, which are on by default, because I want to see where absolute time is spent, not something relative. From the Method Grid I can drill down to see where time is spent, and I can look at the decompiled methods where the time goes. This does look really nice. But did you see the size of the scroll bar in the Method Grid? Although I wanted all call stacks, I get only about 4 pages of methods to drill down into. From the scroll bar size I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried, in the configuration window, profiling the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems. If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel (the "alpha channel") for transparency (used to achieve effects like semi-transparent windows etc.). The following can be assumed to be in place:
    1. A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2. A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3. Drivers for the graphics card that support 10-bit output.
    If applications that support 10-bit output and color profiles were used, I would expect them to correctly display images saved in different color spaces. For example, both an sRGB and an Adobe RGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor. For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion was done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now be only 247 available levels for the red channel instead of 256, degrading the displayed image quality. My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, RawTherapee, Darktable, RawStudio, Photivo etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
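
    To make the arithmetic concrete, here is a small illustrative sketch (the correction factors 246/255 and 987/1023 are the question's example numbers, not a real monitor profile). It counts how many distinct red levels survive the gamut correction at each bit depth:

        #include <cmath>
        #include <cstdio>
        #include <set>

        int main() {
            std::set<long> levels8, levels10;
            for (int v = 0; v <= 255; ++v) {
                // correction applied while still in 8 bits: some inputs collide
                levels8.insert(std::lround(v * 246.0 / 255.0));
                // expand to 10 bits first, then correct: every input stays distinct
                levels10.insert(std::lround(v * (1023.0 / 255.0) * (987.0 / 1023.0)));
            }
            std::printf("8-bit path:  %zu distinct levels\n", levels8.size());   // fewer than 256
            std::printf("10-bit path: %zu distinct levels\n", levels10.size());  // all 256 preserved
        }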

    Read the article

  • Feedback from SQLBits 8

    - by Peter Larsson
    This year's SQLBits took place in Brighton. Although I didn't have the opportunity to attend the full conference, I gave a presentation on Saturday. Getting to Brighton was easy. I drove to Copenhagen airport at 0415, flew at 0605 and arrived at Gatwick at 0735. Then I took the direct train to Brighton and showed up at 0830, just one hour before presenting. This was the easy part. Getting home was much worse. The presentation ended at 1030 and I had to rush to the train station to get back to London, then change to the Tube for Heathrow. I made it to the gate just 15 seconds before it closed. That included a half-mile run through the airport… Anyway, yesterday I got the feedback for my presentation. It looks good, especially since English is not my first language. The first graph seems to put me just halfway between the conference average and the best session. I can live with that. The second graph shows more detail about how attendees voted. It also looks acceptable. There is a wider spread for the 9s, but that is an inevitable effect of how attendees perceive the session. I got a lot of 8s, and the lower grades in descending order. The two people voting 4 and 5 didn't say why they voted that way, so I don't know how to remedy it. The third graph covers each category of votes. Again, I find this acceptable. The session abstract and speaker's knowledge seem to match attendee expectations compared to the conference average. I seem to have met the attendees' expectations (and then some) for the other four categories, also compared to the conference average. Since this encouraged me, I believe I will present some more at future meetings. I have a new presentation about something all developers do every day, though they may not know it. I will also cover this new topic in the next Deep Dives II book. Stay tuned! //Peter

    Read the article

  • A little gem from MPN&ndash;FREE online course on Architectural Guidance for Migrating Applications to Windows Azure Platform

    - by Eric Nelson
    I know a lot of technical people who work in partners (ISVs, System Integrators etc.). I know that virtually none of them would think of going to the Microsoft Partner Network (MPN) learning portal to find deep, high-quality technical content. Instead they would head to MSDN, Channel 9, msdev.com etc. I am one of those people :-) Hence imagine my surprise when I stumbled upon this little gem: Architectural Guidance for Migrating Applications to Windows Azure Platform (your company, and hence your Live ID, needs to be a member of MPN – which is free to join). This is first-class stuff – and represents about 4 hours, which is really 8 if you stop and ponder :)
    Course Structure
    The course is divided into eight modules. Each module explores a different factor that needs to be considered as part of the migration process.
    Module 1: Introduction: This section provides an introduction to the training course, highlighting the value of the Windows Azure Platform for developers.
    Module 2: Dynamic Environment: This section goes into detail about the dynamic environment of the Windows Azure Platform. This session will explain the difference between current development states and the Windows Azure Platform environment, detail the functions of roles, and highlight development considerations to be aware of when working with the Windows Azure Platform.
    Module 3: Local State: This session details the local state of the Windows Azure Platform. This section details the different types of storage within the Windows Azure Platform (Blobs, Tables, Queues, and SQL Azure). The training will provide technical guidance on local storage usage, how to write to blobs, how to effectively use table storage, and other authorization methods.
    Module 4: Latency and Timeouts: This session goes into detail explaining the considerations surrounding latency, timeouts and how to assess an IT portfolio.
    Module 5: Transactions and Bandwidth: This session details the performance metrics surrounding transactions and bandwidth in the Windows Azure Platform environment. This session will detail the transaction and bandwidth costs involved with the Windows Azure Platform and mitigation techniques that can be used to properly manage those costs.
    Module 6: Authentication and Authorization: This session details authentication and authorization protocols within the Windows Azure Platform. This session will detail information around web methods of authorization, web identification, Access Control benefits, and a walkthrough of Windows Identity Foundation.
    Module 7: Data Sensitivity: This session details data considerations that users and developers will experience when placing data into the cloud. This section of the training highlights these concerns, and details the strategies that developers can take to increase the security of their data in the cloud.
    Module 8: Summary: Provides an overall review of the course.

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu selections and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process, first at an introductory conceptual level and then as a deep dive into the programming needed, using JDeveloper, Oracle BPM Composer and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are code samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats, with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements are created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the named PDF fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work, as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu-driven, while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources on Oracle BPM and WLS can be referenced to cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • OTN Virtual Technology Summit - July 9 - Middleware Track

    - by OTN ArchBeat
    The Architecture of Analytics: Big Time Big Data and Business Intelligence. This four-session track, part of the free OTN Virtual Technology Summit on July 9, will present a solution architect's perspective on how business intelligence products in Oracle's Fusion Middleware family and beyond fit into an effective big data architecture, offering insight and expertise from Oracle ACE Directors and product team experts specializing in Business Intelligence to help you meet your big data business intelligence challenges. Register now!
    Sessions
    Oracle Big Data Appliance Case Study: Using Big Data to Analyze Cancer-Genome Relationships - Tom Plunkett, Lead Author of the Oracle Big Data Handbook. What does it take to build an award winning Big Data solution? This presentation takes a deep technical dive into the use of the Oracle Big Data Appliance in a project for the National Cancer Institute's Frederick National Laboratory for Cancer Research. The Frederick National Laboratory and the Oracle team won several awards for analyzing relationships between genomes and cancer subtypes with big data, including the 2012 Government Big Data Solutions Award, the 2013 Excellence.Gov Finalist for Innovation, and the 2013 ComputerWorld Honors Laureate for Innovation. [30 mins]
    Getting Value from Big Data Variety - Richard Tomlinson, Director, Product Management, Oracle. Big data variety implies big data complexity. Performing analytics on diverse data typically involves mashing up structured, semi-structured and unstructured content. So how can we do this effectively to get real value? How do we relate diverse content so we can start to analyze it? This session looks at how we approach this tricky problem using Endeca Information Discovery. [30 mins]
    How To Leverage Your Investment In Oracle Business Intelligence Enterprise Edition Within a Big Data Architecture - Oracle ACE Director Kevin McGinley. More and more organizations are realizing the value Big Data technologies contribute to the return on investment in Analytics. But as an increasing variety of data types reside in different data stores, organizations are finding that a unified Analytics layer can help bridge the divide in modern data architectures. This session will examine how you can enable Oracle Business Intelligence Enterprise Edition (OBIEE) to play a role in a unified Analytics layer and the benefits and use cases for doing so. [30 mins]
    Oracle Data Integrator 12c As Your Big Data Data Integration Hub - Oracle ACE Director Mark Rittman. Oracle Data Integrator 12c (ODI12c), as well as being able to integrate and transform data from application and database data sources, also has the ability to load, transform and orchestrate data loads to and from Big Data sources. In this session, we'll look at ODI12c's ability to load data from Hadoop, Hive, NoSQL and file sources, transform that data using Hive and MapReduce processing across the Hadoop cluster, and then bulk-load that data into an Oracle Data Warehouse using Oracle Big Data Connectors. We will also look at how ODI12c enables ETL-offloading to a Hadoop cluster, with some tips and techniques on real-time capture into a Hadoop data reservoir and techniques and limitations when performing ETL on big data sources. [90 mins]
    Register now!

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high-value information, helping workers to get their jobs done with greater accuracy and in less time, is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use in the third week and during the third year. What can be done to bridge this gap? Over the next few blog entries, let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content, with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one stage of the content lifecycle process, as outlined below.
    Content Lifecycle Stages
    1. Request - Understand the education, purpose, resource and success criteria for content
    2. Create - Determine access and workflow for content
    3. Manage - Understand ownership and review cycles
    4. Retire - Act on thresholds established during the request stage
    Within each stage we will also elaborate as to:
    1. Why - why would we entertain doing this?
    2. How - the steps that are needed to make it happen
    3. Impact - what is the net benefit or loss based on the process
    Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for the webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!

    Read the article

  • I didn't mean to become a database developer, but now I am. Should I stop or try to get better?

    - by pretlow majette
    20 years ago I couldn't afford even a cheap POS program when I opened my first surf shop in the Virgin Islands. I bought a copy of Paradox (remember that?) in 1990 and spent months in a back room scratching out a POS application. Over many iterations, including a switch to Access (2000)/SQL Server (2003), I built a POS and back-office solution that runs four stores with multiple cash registers, a warehouse and an office. Until recently, all my stores were connected to the same LAN (in a small shopping center) and performance wasn't an issue. Now that we've opened a location in the States, that's changed. Connecting to my local server via the internet has slowed that location's application to a crawl. This is partly due to the slow and crappy DSL service we have in the Virgin Islands, and partly due to my less-than-professional code and SQL. With other far-away stores in the works, I need a better solution. I like my application. My staff knows it well, and I'm not inclined to take on the expense of a proper commercial solution. So where does that leave me? I should probably host my SQL online to sidestep the slow DSL here. I think I can handle cleaning up my SQL queries to speed that up a bit. What about Access? My version seems so old, but I don't like the newer versions with the 'ribbon'. There are so many options... Should I be learning Visual Studio with an eye on moving completely to the web? Will my VBA skills help me at all there? I don't have the luxury of a year at the keyboard to figure it out anymore. What about DotNetNuke, SharePoint, or LightSwitch? They all seem like possibilities, but even understanding their capabilities is daunting. I'm pretty deep into it, but maybe I should bail and hire a consultant or programmer. That sounds expensive though, and there's no guarantee there either... Any advice would be greatly appreciated. Or, if anybody is interested in buying a small chain of surf shops...

    Read the article

  • Are You Meeting Social Customer Service Expectations?

    - by Mike Stiles
    Whether it's B2B or B2C, one sure path to repeat business is making sure your buyer has a memorably pleasant and successful customer service experience with you. If they get that kind of treatment consistently, that's called a relationship. And those aren't broken easily. Social customer service, driven by integrated SRM (social relationship management) technology, is the venue that can effectively connect customers not only to the brand, but to other customers. Positive experiences, once administered, don't just rest with the recipient. They're published in the form of public raves and peer-to-peer recommendation, a force far more actionable than push advertising. What's more, your customers have come to expect access to you, and satisfaction from you, using social. An NM Incite study shows 83% of Twitter users and 71% of Facebook users expect to get an answer from brands the same day they post to them on their social assets. To make sure you're responding, you've got to have a tech platform that's set up to moderate and alert, so you'll know ASAP when a customer needs help. The more integrated your social enterprise is, the faster you can not only respond, but respond with the answer they're looking for, because your system is connected to the internal resources that can surface the answer or put wheels in motion to rectify the situation in the shortest amount of time possible. But if you go to the necessary lengths to make sure your customers feel valued and important, will they really reward you? The study says 71% of consumers who got quick and effective responses from companies they contacted via social were more likely to recommend the brand to their friends and followers. So yes, sweeping people off their feet pays big dividends in terms of word-of-mouth marketing. But you should be keenly aware of the reverse side of that coin. Give people a negative experience, either in real-world or virtual customer service, and that message is highly likely to get amplified through social channels, faster and louder. Only 36% of the NM Incite study's respondents reported that their problems were solved quickly and effectively. 36%? That's hardly an impressive number. It gets worse. 10% never got so much as a response - at all. Going back to the relationship analogy, companies that are this deep in the ditch where customer service is concerned are making their girlfriends or boyfriends really easy for a competitor to steal. Given the technology tools and data available right now for having an intimate knowledge of the customer, what products they've purchased, likely problems with those products, effective resolutions to those problems, and follow-up communication to gauge satisfaction, there are fewer excuses than ever for making the lifeblood of your business feel like you couldn't care less. @mikestiles

    Read the article

  • Response: Agile's Second Chasm

    - by Malcolm Anderson
    William Pietri over at Agile Focus has written an interesting article entitled "Agile's Second Chasm (and how we fell in)", in which he talks about how agile development has fallen into a common trap where large companies are now spending a lot of money hiring agile (Scrum) consultants just so that they can say they are agile, all the while avoiding any change that is required by Scrum. It echoes the question that I've been asking for a while: "Can a Fortune 500 company actually do agile development?" I'm starting to think that the answer is "usually not". William asks 3 questions at the end of his article that I will answer here.
    1) Have I seen agile development brought in and then preemptively customized (read: made into ScrummerFall)? Yes. Scrum is hard and disruptive. It's a spotlight on company dysfunction. In a low-trust environment like most Fortune 500 companies, Scrum will be subverted by anyone who has ever seen "transparency" translate into someone being laid off.
    2) If I had to do it all over again, would I change anything? No, this is a natural progression, but the agile principles are powerful enough that the companies that don't adopt them will no longer be competitive and will start to fail.
    3) Is this situation solvable? I think it is. I think that one of the issues is that you often see companies implementing Scrum but avoiding the agile engineering practices. I believe that you cannot do one without the other. Scrum keeps the ship sailing in smooth, deep waters. The agile engineering practices keep the engine running smoothly and cleanly. If you implement agile engineering practices without Scrum, you run the risk of ending up with a great-running piece of software that is useful to no one. On the other hand, implement cargo-cult Scrum without the agile engineering practices and you end up (especially in a Fortune 500 company) being steered in the right direction, but with your development practices coming to a dead halt because you have code that cannot keep up with the changes in requirements. If you are trying to do Scrum, make sure that you hire some agile engineering coaches, or else you may find your development engines grinding to a dead halt in the middle of the open ocean.

    Read the article

  • Rolling With the Punches

    - by D'Arcy Lussier
    So I've been tweeting "Rolling with the punches" the last little while, and I've had some people ask me what that meant. Whether you're running a conference (like I am this week), or a project, or a birthday party for a 2-year-old, you need to be ready to handle those things that are unexpected. Risk mitigation can only go so far, and it's at those times that you need to become resourceful. So let me tell you what the last few days have been like. Today is the first day of Prairie Dev Con Winnipeg, a conference that I run. On Friday I was informed that my keynote speaker had lost his voice, one of my speakers had a family emergency and had to back out, and I got a warning from another that he was travelling over the weekend and if there was a storm or something he might not be able to get back by Monday for his talk. A storm didn't happen, but their car did break down and he was delayed. Finally, Saturday night I took my printing order to Staples. It was at 5 and they closed at 6, and I had a bunch of surveys to be printed and cut. The girl working said that she'd have it ready by the next day (Sunday). Her intent was to come in the next morning and finish the job. Unfortunately, she had to be hospitalized that night and never made it into work… and never informed anyone of the remaining work. They found out at 3 pm when I came to pick it up, and there was no way they'd be able to cut everything in time. So how did we roll with these punches?
    - Miguel, my keynote speaker, was a trooper and was able to do the keynote, but asked that his session get moved from Monday to Tuesday. This is why I wait until the last day before printing out schedules; they can change right up to the event, and even later.
    - I was able to move some sessions around to accommodate my stranded speaker and fill the empty slot from the speaker that couldn't make it.
    - Staples was able to get me half the cut surveys, so I took those and my wife will pick up the rest today. I altered how we'd collect session surveys, and actually I think it'll work better.
    So all of this is to say: plan, but also plan for what you can't plan for. There will be things that happen that blindside you, that you're not sure how to handle or solve. Stop, take a deep breath, and don't feel that you need to limit yourself to the boundaries that you initially set for yourself. Roll with the punch and learn from it so that you can avoid the blow next time. Now, back to the conference! D

    Read the article

  • How do I connect two computers with a LAN cable?

    - by John
    I have two machines - a desktop running Windows XP and a laptop running Windows 7. I connected them with a LAN cable. On the Windows XP machine, I set the IP address to 192.168.0.10. On the Windows 7 laptop, I set the IP address to 192.168.0.20. The laptop can see the Windows XP machine, but the Windows XP machine cannot see the Windows 7 machine. But this does NOT concern me. I want to move the files from my desktop (Windows XP) to the Windows 7 laptop. That's why I'm going through all this. The problem is that when I try to connect from Windows 7 to the Windows XP machine, I get this window: I don't understand what username/password is needed. I use none on the Windows XP machine. I tried all usernames - no success. Please explain in detail how to solve my problem so I can connect to my Windows XP machine. EDIT: Maybe this can help: the Windows XP machine is named 'I' and '???????? III' is the name of the laptop. Both computers share one workgroup - WORKGROUP.

    Read the article

  • Add Windows 7’s AeroSnap Feature to Vista and XP

    - by Asian Angel
    Are you using Windows Vista or XP and want that Windows 7 AeroSnap goodness on your own system? Then join us as we look at AeroSnap for Windows Vista and XP. Note: Requires .NET Framework 2.0 or higher (link provided at the bottom of the article).
    Setup
    What exactly does AeroSnap do, you might ask… here is a quote directly from the website: "AeroSnap is a simple but powerful application that allows you to resize, arrange or maximize your desktop windows with just drag'n'drop. Simply drag a window to a side of your desktop to snap it, or drag it to the top to maximize. When you drag it back to the last position, the last window size will be restored." As soon as you have finished installing AeroSnap and started it for the first time, the only item that will be visible is the "System Tray Icon". Before going any further, you should take a moment to view and make any desired adjustments in the "Options". Note: AeroSnap works with multiple monitors. You may want to have AeroSnap start with Windows each time, but the really nice setting to enable here is the "Snap Preview". If you are using AeroSnap on Vista and have Aero enabled, this will really be nice. The second portion may be of interest to those who would like to enable the keyboard shortcut function. One point worth noting about this screen is that the highest number of pixels from the screen's edge that you can set AeroSnap for is 20 pixels.
    AeroSnap in Action
    AeroSnap is extremely easy to use… just grab the top of an app window and drag it to the left, right, or top of your screen. Since we installed this on Windows Vista, we made certain to enable the "Snap Preview" in the "Options". We started off by dragging our Firefox 3.7 window towards the left… once we got close to the edge of the screen, you can see that the left half of the screen temporarily "shaded over". Note: The "Snap Preview" displays on the left and right movements, but not the top movement. Releasing Firefox snapped it right into the "shaded over" part of the screen. The great thing about AeroSnap is that it is really easy to return the app window to its former size… all that you have to do is simply click on and grab the top portion of the app window. Moving Firefox towards the top of our screen and… it quickly snaps into filling the screen. One thing that we did notice is that the window did not "Maximize" as per the function of the button in the upper right corner. Dragging towards the right side now… and snap! Tucked in all nice and neat… You can minimize the app windows to the Taskbar and they will return to their previous "snap area" when "maximized" again.
    Conclusion
    If you have been wanting to add Windows 7's AeroSnap goodness to your Vista and XP systems, then you should definitely give this app a try. AeroSnap is very easy to set up and operate…
    Links
    Download AeroSnap for Windows Vista & XP
    Download the .NET Framework

    Read the article

  • Silverlight Cream for March 28, 2010 -- #823

    - by Dave Campbell
    In this Issue: Michael Washington, Andy Beaulieu, Bill Reiss, jocelyn, Shawn Wildermuth, Cameron Albert, Shawn Oster, Alex Yakhnin, ondrejsv, Giorgetti Alessandro, Jeff Handley, SilverLaw, deepm, and Kyle McClellan.
    Shoutouts:
    If I've listed this before, it's worth another... Introduction to Prototyping with SketchFlow (twelve video series) and on the same page is Creating a Beehive Game with Behaviors in Blend 3 (ten video series)
    Shawn Oster announced his Slides + Code + Video from 'An Introduction to Developing Applications for Microsoft Silverlight' from MIX10
    Tim Heuer announced earlier this week: Silverlight Client for Facebook updated for Silverlight 4 RC
    Nikhil Kothari announced the availability of his MIX10 Talk - Slides and Code
    András Velvárt backed up his great MIX09 effort with MIX10.Zoomery.com... everything in one DZ effort... thanks András!
    Andy Beaulieu posted his material for his Code Camp 13 in Waltham: Windows Phone: Silverlight for Casual Games
    From SilverlightCream.com:
    Silverlight MVVM - The Revolution Has Begun: Michael Washington did an awesome tutorial on MVVM and Silverlight, creating a simple Silverlight File Manager. The post has a link to the tutorial at CodeProject... great tutorial.
    Windows Phone 7 + Silverlight Performance: Andy Beaulieu has a post up we should all bookmark... getting a handle on the graphics performance of our apps on WP7. Great examples, and external links.
    Space Rocks game step 6: Keyboard handling: Bill Reiss has a post up about keyboard input for the WP7 game he's building... this is Episode 6... you're working along with him, right?
    Panoramic Navigation on Windows Phone 7 with No Code!: jocelyn at InnovativeSingapore (I found this by way of Shawn's post) has a Panoramic Navigation template out there for WP7 for all of us to grab... great post about it too.
    My First WP7 Application: Shawn Wildermuth has been playing with WP7 development and has his XBOX Game library app up on the emulator... all with source, of course.
    Silverlight and Windows Phone 7 Game: Cameron Albert built a web-based game called 'Shape Attack' and also did it for WP7 to compare the performance... check it out for yourself, but hey, it's game source for the phone... cool :)
    Changing the Onscreen Keyboard layout in Silverlight for Windows Phone using InputScope: Shawn Oster has a cool post on changing the keyboard on WP7 to go along with what you're expecting the user to type... how cool is that??
    Deep Zoom on WP7: Check out the quick work Alex Yakhnin made of putting Deep Zoom on WP7... all source included.
    How to: Create a sketchy Silverlight GroupBox in Blend/SketchFlow: ondrejsv has the XAML up to take Tim Greenfield's GroupBox control and insert it into SketchFlow.
    Silverlight / Castle Windsor – implementing a simple logging framework: Giorgetti Alessandro posted about Castle Windsor for Silverlight, and a logging system inherited from LevelFilteredLogger in the absence of Log4Net.
    DomainDataSource in a ViewModel: Jeff Handley responds to a common forum post about using DomainDataSource in a ViewModel. Read his comments on AutoLoad and ElementName bindings.
    Digital Jugendstil TextEffect (Art Nouveau) - Silverlight 3: SilverLaw has a cool TagCloud demo and a UserControl he calls Art Nouveau up at the Expression Gallery... not for a business app, I don't think :)
    Configuring your DomainService for a Windows Phone 7 application: deepm discusses RIA Services for WP7 and how to enable a WP7 app to communicate with a DomainService.
    Writing a Custom Filter or Parameter for DomainDataSource: Kyle McClellan, by way of Jeff Handley's blog, discusses how to leverage the custom parameter types you defined in the previous version of RIA Services.

    Read the article

  • MacBook Pro 13" Install DVD Wont Start

    - by Belliez
    Hi, excuse such a basic question... I am a laptop fixer and deal with Windows-based laptops only, but I very recently took in a 13" MacBook Pro for a re-install of the OS (easy, I thought!). I inserted the Install DVD, held the C key and turned it on. I could hear the disk spinning up, and after about a minute the DVD is ejected. There are a few scratches on the DVD, but they should be OK as they are not that deep. However, Windows Vista was installed (it failed to install properly, hence the re-install of Mac OS). Should I wipe and format the hard disk first? Could this be the reason the DVD is ejected? Any advice would be gratefully accepted. P.S. I've never held a MacBook Pro before... first impressions: wow... alu casing and massive touchpad... and the magnetic power socket... so impressed, and it doesn't even work!

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:
        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080
    This works, but I wonder:
    - If this has any significant performance impact compared to binding directly to port 80
    - If I could make a similar setup also work for HTTPS (or if that must run on 443)
    - If there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently to other users somehow?
    ...or if I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:
        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...
    For privbind:
        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...
    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):
        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]
    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?) But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
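
    A note on the HTTPS point (an illustrative sketch, not from the original post): the REDIRECT technique works for any TCP port, so HTTPS does not have to run on 443 inside Glassfish; only the client-facing port matters, and TLS itself is port-agnostic. Assuming Glassfish's default HTTPS listener on 8181, the analogous rule would look like:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 443 -j REDIRECT --to-port 8181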

    Read the article

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a 3rd-party provider API for a great deal of our subsystems (can't get into who it is or what they do). The 3rd party, however, is quite stringent about network access and has no notion of a development sandbox. Access is restricted to 2 or 3 IP numbers and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team - which is still problematic, as people's home IPs change, people travel, we have more than 2 devs, etc. Wide IP blocks are not permitted by the 3rd party. Nor will they allow dynamic-DNS-type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road). As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as 3rd-party hosts for VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion would be a 3rd-party VPN that we'd all connect to, and we'd register this as an IP origin with our 3rd party. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs, however (I believe 5?), so this would only be a stopgap solution until our team size outstrips our IP count at Amazon. Those were the only viable thoughts that I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

    Read the article

  • Time drift in Cloud Server - need to manipulate GRUB config

    - by Aditya Advani
    We are hosting a VPS on a popular host and are experiencing a regular time drift of several minutes a day forward (approx 7). Linux kernel: 2.6.18-164.11.1.el5 GNU/Linux. Distro: CentOS release 5.4 (Final). We reached out to our hosting provider and their support advised us: "This is a known issue with Cloud Servers. To fix this you will need to add one line to your grub config located at: /boot/grub/menu.lst The line you need to add is: noapic nolapic divider=10 nolapic_timer This should correct this issue. You will need to restart after this is added in." Because I am wary of manipulating GRUB - mostly, I'm terrified that our server may fail to restart - I ask you guys, the pro *nix admins: where exactly in this file does the recommended insertion below go?
        # line from 1&1 for time syncing issue (Case 5163)
        noapic nolapic divider=10 nolapic_timer
    Please specify where exactly, and whether the order of commands is or is not important. Why is the block below "title CentOS ..." indented? If someone could give me an overview of how this works, or point me to a resource that's easy to follow, that's what I'm looking for immediately - a light overview or basic understanding of what I'm doing. If GRUB and bootloaders are a deep dark treasure trove of kernel hacking or something, well-recommended in-depth resources are also very welcome. This is my current /boot/grub/menu.lst:
        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        #boot=/dev/sda
        # serial --unit=0 --speed=57600
        terminal --timeout=5 serial console
        timeout=5
        title CentOS (2.6.18-164.11.1.el5)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty
            initrd /boot/initrd-2.6.18-164.11.1.el5.img
    MOST IMPORTANT: I need to know where in the file above it is appropriate to paste the suggested line so I can confidently restart my VPS after manipulating the GRUB config.
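
    For reference, a sketch of where such options normally go (worth verifying against the host's docs): in GRUB legacy, kernel boot parameters are appended to the end of the kernel line inside the title stanza - they do not go on a line of their own, and the stanza's indentation is purely cosmetic. With the suggested options, the stanza would look like:

        title CentOS (2.6.18-164.11.1.el5)
            root (hd0,0)
            kernel /boot/vmlinuz-2.6.18-164.11.1.el5 ro root=/dev/hda1 console=tty0 console=tty noapic nolapic divider=10 nolapic_timer
            initrd /boot/initrd-2.6.18-164.11.1.el5.img

    The relative order of those four options does not matter to the kernel.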

    Read the article

  • htaccess not properly rewriting urls

    - by Cameron Ball
    This is a bit of a weird one. I'm doing some work on a server, and I need rewrite rules for directories that actually exist (in some cases, they are more than one level deep). At the moment my .htaccess looks like this:
        RewriteEngine on
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]
    And this is working OK. For example, a URL like mydomain.com/simfiles/my-files will get redirected to mydomain.com/?portal=simfiles&folder=my-files. Or, in the case of a directory structure that is deeper than one level, mydomain.com/simfiles/my-files/more-of-my-files will get redirected to mydomain.com/?portal=simfiles&folder=my-files/more-of-my-files. I wrote the regex so that it won't match things with a . in the path, because there are css and js files which reside in simfiles/somedirectory, and if I redirect everything then these cannot be loaded. I tried a configuration like this:
        RewriteEngine on
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^simfiles/([-\ a-zA-Z0-9:/\.]+)$ http://mydomain.com/?portal=simfiles&folder=$1 [L]
    But that doesn't work; things still don't load properly. So my first question is: how can I achieve this "properly"? I don't like my solution because it means redirects won't occur if the folder has a . in its name. My second problem is that while the redirection is happening properly, the URL becomes http://mydomain.com/?portal=simfiles&folder=my-files. I want the URL to remain clean, like http://mydomain.com/simfiles/my-files. How can I achieve this?
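
    One avenue worth noting (an untested sketch against this setup, not from the original post): when a RewriteRule target is an absolute URL, mod_rewrite issues an external redirect, which is exactly what makes the browser's address bar change. Rewriting to a local path instead keeps the URL clean, and a file-exists check lets the css/js through without having to exclude dots in the pattern:

        RewriteEngine on
        # leave real files (css, js, images) alone
        RewriteCond %{REQUEST_FILENAME} !-f
        # no scheme/host in the target, so this is an internal rewrite and the visible URL is untouched
        RewriteRule ^simfiles/(.+)$ /?portal=simfiles&folder=$1 [L,QSA]

    Note that a RewriteCond only applies to the RewriteRule immediately following it, so it has to sit directly above the rule it guards.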

    Read the article

  • what's the difference between a Volume and a Partition in Windows 7 diskpart

    - by user170232
    I was trying to follow the Intel guide for setting up iRST (Intel Rapid Start Technology) on my new laptop. The Intel manual says you need to create a volume that is as big as or bigger than your available memory, set it to a specific id (id=84), then go into the iRST tool and adjust some settings. Looking at the disk manager on the laptop, I see there is already a partition labeled as "Hibernation Partition" which is a little bigger than the memory in my system. So it looks like iRST was already set up... BUT it's a partition, not a volume. Here's what the manual says to do (from: http://download.intel.com/support/motherboards/desktop/sb/rapid_start_technology_user_guide.pdf):
        diskpart
        list disk
        select disk x (where x is the disk to use; there's only one disk in this laptop)
        create partition primary size=X000 (where X000 is the size to create)
        detail disk (which lists details for the disk - this is where I get hung up)
        select volume Z (where Z is the partition you created previously)
        set id=84 override
        exit
    The manual says the 'detail disk' command will list the volume number, but it doesn't. 'detail disk' only lists two "volumes", for Recovery and OS. (If I do 'list partition', I see the 8 GB partition labeled as "Hibernation Partition".) So I can't continue with the remaining steps. The reason I went looking for the manual is that when iRST is enabled in the BIOS, the system won't resume from sleep. When it's disabled, it works fine, but the system goes into (legacy?) hibernation mode and takes a while to come out of it. iRST is supposed to resume from deep sleep very quickly. So, what's the difference between a volume and a partition? Should I delete the hibernation partition and create a hibernation volume? Anyone have any ideas? (If it matters, this is on a Dell XPS 13 with BIOS A08.) Thanks! J
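
    For what it's worth, a sketch of the distinction and a possible workaround (the partition number is assumed; check 'list partition' first): in diskpart, a volume is a filesystem-level object - it is what gets a drive letter, and it can even span disks - while a partition is a raw slice of one physical disk. A raw, unformatted slice like a type-84 hibernation partition never appears in the volume list, which is why 'detail disk' doesn't show it. On an MBR disk, the id can be set on the partition directly:

        diskpart
        select disk 0
        list partition
        select partition 3 (assuming partition 3 is the one created for iRST)
        set id=84 override
        exit

    Note that 'set id' in this two-digit form only works on MBR disks; GPT disks take a GUID instead.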

    Read the article

  • What do I need in order to extract and combine text files from multiple ZIP files, via command line?

    - by Iszi
    I've got an interesting scripting challenge in front of me. I'm fairly certain there's a way to do it, but I feel like I'm probably lacking some particular tools and/or functional knowledge. There's some fifty-plus ZIP files that each contain, among other things, text files that need to be merged with one another. The structure is something like this:
        C:\Reports\FirstJob-1.zip
          |-MyName
            |-FirstJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt
        C:\Reports\FirstJob-2.zip
          |-MyName
            |-FirstJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt
        C:\Reports\SecondJob-1.zip
          |-MyName
            |-SecondJob
              |-1
                |-[Some other folders]
                |-TXTReports
                  |-English
                    |-[Some other files]
                    |-Report.txt
    If I had all the Report.txt files in one regular folder, and uniquely named, I could probably just write a FOR statement that targets *.txt and runs something like type filename.txt >> Consolidated.txt on each. However, these all have the same file name and are embedded deep within separate ZIP files. The potentially useful tools I currently have at my disposal are Windows XP Professional SP3, PowerShell, and WinZip. I'd rather not download or install anything else, but I do understand that third-party tools (or additional tools from Microsoft or WinZip) may be necessary. Whatever tools I use should run natively in Windows. I really don't want to have to mess with Cygwin or other emulators on this system. At the very least, I need a tool that will allow me to analyze and manipulate ZIP files from the command line. Also, are there any other particular complications to this that I've not yet thought of?
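
    A rough PowerShell sketch of one way to attack this, using the Shell.Application COM object that ships with Windows (an untested illustration; the paths and file names come from the question, everything else is assumed, and CopyHere is asynchronous, hence the wait loop):

        $shell  = New-Object -ComObject Shell.Application
        $out    = 'C:\Reports\Consolidated.txt'
        $temp   = 'C:\Reports\ExtractTemp'
        New-Item -ItemType Directory -Path $temp -Force | Out-Null
        $tempNs = $shell.NameSpace($temp)

        # Recursively yield every Report.txt inside a shell Folder (works inside ZIPs too)
        function Find-Reports($folder) {
            foreach ($item in $folder.Items()) {
                if ($item.IsFolder) { Find-Reports $item.GetFolder }
                elseif ($item.Name -eq 'Report.txt') { $item }
            }
        }

        foreach ($zip in Get-ChildItem 'C:\Reports\*.zip') {
            foreach ($report in Find-Reports $shell.NameSpace($zip.FullName)) {
                $dest = Join-Path $temp 'Report.txt'
                $tempNs.CopyHere($report, 0x14)    # 0x14 = no progress UI, overwrite
                while (-not (Test-Path $dest)) { Start-Sleep -Milliseconds 100 }
                Get-Content $dest | Add-Content $out
                Remove-Item $dest
            }
        }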

    Read the article

  • ProCurve network expansion

    - by Blue Warrior NFB
    I've hit a bit of a wall with our network scale-out. As it stands right now: we have five ProCurve 2910al switches connected as above, but with 10GbE connections (two CX4, two fiber). This fully populates the central switch above; there will be no more 10GbE Ethernet connections from that device. This group of switches is not stacked (no stack directive). Sometime in the next two or three months I'll need to add a sixth, and I'm not sure how deep of a hole I'm in. Ideally I'd replace the core switch with something more capable that has more 10GbE ports. However, that's a major outage and that requires special scheduling. The two edge switches connected via fiber have dual-port 10GbE cards in them, so I could physically put another switch on the far end of one of those. I don't know how much of a good or bad idea that would be though. Is that too many segments between end-points? A config excerpt:
        Running configuration:
        ; J9147A Configuration Editor; Created on release #W.14.49
        hostname "REDACTED-SW01"
        time timezone 120
        module 1 type J9147A
        module 2 type J9008A
        module 3 type J9149A
        no stack
        trunk B1 Trk3 Trunk
        trunk B2 Trk4 Trunk
        trunk A1 Trk11 Trunk
        trunk A2 Trk12 Trunk
        vlan 15
           name "VM-MGMT"
           untagged Trk2,Trk5,Trk7
           ip helper-address 10.1.10.4
           ip address 10.1.11.1 255.255.255.0
           tagged 37-40,Trk3-Trk4,Trk11-Trk12
           jumbo
           ip proxy-arp
           exit

    Read the article

  • Permissions won't cascade more than 1 level

    - by Jovin_
    Running Windows Small Business Server 2011. I have a file structure with a lot of subfolders (sometimes 5-6 levels deep). I have created access groups to grant access to my users, and also deny groups to deny access to others: X Access & X Deny. These allow or deny access to a mapped network drive X:. On the server I put in the groups with Full Control Allow for X Access and Full Control Deny for X Deny. I also tick the box "Apply these permissions to objects and/or containers within this container only" and have ensured that "Apply to:" is "This folder, subfolders and files". But for some reason the permissions will only apply to the next level of folders & files. Example structure:
        X:
          Folder 1
            Folder 1a
          Folder 2
            Folder 2a
    If I apply the permissions to X:, they'll only go to Folder 1 & 2, not 1a and 2a; I then need to manually apply the permissions to these too. Is this working as intended, or am I doing something wrong?
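
    A side note that may explain the behavior (an illustrative sketch, with hypothetical domain, group, and path names): in the Windows ACL model, the "within this container only" checkbox maps to the no-propagate-inherit (NP) flag, which deliberately stops inheritance after one level, so it works against an "Apply to: This folder, subfolders and files" setting. The same ACEs can be set with icacls, where the inheritance flags are explicit - (OI) object inherit and (CI) container inherit flow the entry to all files and subfolders at every depth:

        icacls "D:\Shares\X" /grant "MYDOMAIN\X Access:(OI)(CI)F"
        icacls "D:\Shares\X" /deny  "MYDOMAIN\X Deny:(OI)(CI)F"

    Adding (NP) to either entry would reproduce the one-level-deep behavior described above.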

    Read the article
