Search Results

Search found 676 results on 28 pages for 'the deals dealer'.

Page 12 of 28

  • 50% off ASP.NET hosting

    - by Fabrice Marguerie
    I haven't blogged for a long time because I'm busy working on an exciting new project. It's too early to tell you more; I'll provide details in a few months. Meanwhile, I wanted to write a quick post to share an excellent offer with you. It's that time of the year when you can get deals on many things, including web hosting. I'd like to remind you about Arvixe, a great web hosting provider for Windows (for ASP.NET) and Linux. For 48 hours, between Thursday November 24th at 00:00 PST (08:00 GMT/UTC) and Friday November 25th, Arvixe will be offering 50% off all of their shared hosting products. This will be for all new accounts, for life (as long as you continue to renew the account)! I've been using Arvixe for my websites for more than a year and a half now, and I highly recommend them. Here is an overview of what I get for a very good price:
    • Unlimited disk space
    • Unlimited data transfer
    • Unlimited domains
    • Unlimited POP3 and IMAP mailboxes
    • Unlimited SQL Server 2008 databases
    • Unlimited MySQL databases
    • .NET 1.1, 2, 3.5 and 4
    • Dedicated application pools
    • Full trust
    • IIS 7
    • Daily backups
    • and more...
    And now, you can get all that for half the price. Just go to Arvixe.com and secure your own hosting account now by using the coupon code "Black Friday" during checkout.
    Disclaimer: the links to Arvixe are affiliate links that may bring me some money if you sign up. Still, I recommend Arvixe because I use them and I'm very happy with what they offer.

    Read the article

  • ACT On Marketing Campaign “Middleware Consolidation and Innovation Program”

    - by JuergenKress
    You want marketing budget to run joint Oracle Fusion Middleware 12c events? Participate in the OFM ACTon Campaign. The opportunity for you as a partner is to:
    • Create larger deals by reselling software and systems, e.g. WebLogic on ODA, SOA on ODA, Exalogic for AppAdvantage
    • Create more service revenue at your existing customers through consolidation and migration of application server platforms
    • Extend and innovate platforms, e.g. mobile integration, big data, or business process automation
    • Create service business at new customers; more than 120,000 customers use middleware today!
    The objective of the initiative is to run joint events for our middleware customers and to:
    • Generate re-sell middleware license revenue in the broad market
    • Generate service revenue for partners
    • Prepare partners to understand upgrade and upsell opportunities to Oracle Fusion Middleware 12c
    In the WebLogic Community Workspace (WebLogic Community membership required) you can learn the details of the campaign: OFM ACTon event Brief & Middleware Consolidation and Innovation_Act-On Program_Salesplay & Campaign kit DRAFT. Interested and want to participate? Contact your local Value Added Distributor and he will work with you on a joint campaign plan! For regular information, become a member of the WebLogic Partner Community. Please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: marketing,ACton,Campaign,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • South Florida Stony Brook Alumni & Friends Reception 2011

    - by Sam Abraham
    It’s official: we are kicking off a local South Florida chapter for Stony Brook alumni and friends in the area to keep in touch. Our first networking event will take place at Champps, Ft. Lauderdale on November 17th, 6:00-8:00 PM. Admission is free and open to everyone, whether or not they are Stony Brook alums. The team at Champps is offering us great specials (happy hour deals, half-price appetizers, etc.) that we can choose to enjoy while we network and catch up. (Event announcement: http://alumniandfriends.stonybrook.edu/page.aspx?pid=299&cid=1&ceid=171&cerid=0&cdt=11%2f17%2f2011) I look forward to sharing and reviving my college experience, which I believe was the starting line of my ongoing life journey. It would also be great to hear others’ takes as they reflect on their experiences throughout their college years. I invite anyone interested in keeping in touch with friends and alums of Stony Brook to join our LinkedIn or Facebook groups.
    The Stony Brook Alumni Association – South Florida Chapter LinkedIn Group: http://www.linkedin.com/groups?gid=3665306&trk=myg_ugrp_ovr
    The Stony Brook Alumni Association – South Florida Chapter Facebook Group: http://www.facebook.com/#!/groups/114760941910314/

    Read the article

  • Personalized Pricing

    - by David Dorf
    In past postings I've spent a fair amount of time talking about targeted promotions. Using a complete view of the customer that includes purchase history, location history, and psychographics gleaned from social media, we can select the offer with the greatest chance of redemption. This is done to influence shopping behavior, which might be introducing the consumer to a new product line, increasing their basket size, increasing frequency of purchases, etc. Safeway seems to be taking a slightly different approach with their personalized pricing. In addition to offering electronic coupons and club card offers, they are also providing a personalized price for certain items based on purchase history. So when Sally wants to shop at Safeway, she first checks the "Just for U" website for three types of deals. She starts by selecting manufacturer coupons to load into her loyalty card, then she checks the Club Card for offers like "buy one get one free." The third step is the interesting one: Safeway will set a particular lower price for Sally, good for 90 days, on items she buys often. Clearly this isn't enforcing a new behavior but rather instilling loyalty. I would love to know exactly how they are determining the personalized price. Of course bargain hunters can still stack the three offers so they can, for example, get their $4.99 oatmeal for $0.72. I like this particular question and answer from their website's FAQ: "My offers are not that great. Can I tell you what offers I need?" "That's a good idea. That functionality is not currently available, but we appreciate your input and are constantly improving our just for U program. Stay tuned for exciting enhancements!" I suppose if Safeway is tracking all the purchases, they can easily determine whether the customer is profitable. As long as the customer stays profitable, why not let them determine a few offers themselves? Food for thought.
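
    To make the "stacking" concrete, here is a toy calculation; every discount amount below is invented for illustration, since the post doesn't say how the $4.99-to-$0.72 oatmeal deal actually breaks down:

        # Hypothetical numbers only: one way three stacked offers could take
        # $4.99 oatmeal down to $0.72.
        shelf_price = 4.99
        personalized_price = 2.44    # made-up "Just for U" price, good for 90 days
        club_card_discount = 1.00    # made-up Club Card offer
        manufacturer_coupon = 0.72   # made-up coupon loaded onto the loyalty card

        final = personalized_price - club_card_discount - manufacturer_coupon
        print(f"${final:.2f}")       # $0.72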

    Read the article

  • Information Spilling Across Object Boundaries

    - by Winston Ewert
    Many times my business objects tend to have situations where information needs to cross object boundaries too often. When doing OO, we want information to be in one object, and as much as possible all code dealing with that information should be in that object. However, business rules do not follow this principle, giving me trouble. As an example, suppose that we have an Order which has a number of OrderItems, each of which refers to an InventoryItem which has a price. I invoke Order.GetTotal(), which sums the result of OrderItem.GetPrice(), which multiplies a quantity by InventoryItem.GetPrice(). So far so good. But then we find out that some items are sold with a two-for-one deal. We can handle this by having OrderItem.GetPrice() do something like InventoryItem.GetPrice( quantity ) and letting InventoryItem deal with this. However, then we find out that the two-for-one deal only lasts for a particular time period. This time period needs to be based on the date of the order. Now we change OrderItem.GetPrice() to be InventoryItem.GetPrice( quantity, order.GetDate() ). But then we need to support different prices depending on how long the customer has been in the system: InventoryItem.GetPrice( quantity, order.GetDate(), order.GetCustomer() ). But then it turns out that the two-for-one deals apply not just to buying multiples of the same inventory item, but to multiples of any item in an InventoryCategory. At this point we throw up our hands and just give the InventoryItem the order item, and allow it to travel over the object reference graph via accessors to get the information it needs: InventoryItem.GetPrice( this ). TL;DR: I want low coupling between objects, but business rules often force me to access information from all over the place in order to make particular decisions. Are there good techniques for dealing with this? Do others find the same problem?
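
    One common way out of this parameter creep (a sketch of my own, not from the post; all names and the promotion window are hypothetical) is to have the Order assemble an explicit pricing context carrying only the facts the rules need, so InventoryItem never walks the object graph:

        # Minimal sketch, hypothetical names: the order flattens what the
        # pricing rules need into a value object before asking for a price.
        from dataclasses import dataclass
        from datetime import date

        @dataclass(frozen=True)
        class PricingContext:
            quantity: int
            order_date: date
            customer_tenure_days: int
            category_quantity: int  # total units ordered in the item's category

        class InventoryItem:
            def __init__(self, base_price: float):
                self.base_price = base_price

            def get_price(self, ctx: PricingContext) -> float:
                price = self.base_price * ctx.quantity
                # two-for-one by category, limited to an (invented) window
                if date(2014, 1, 1) <= ctx.order_date <= date(2014, 1, 31):
                    if ctx.category_quantity >= 2:
                        price -= self.base_price * (ctx.quantity // 2)
                # loyalty pricing for long-standing customers
                if ctx.customer_tenure_days > 365:
                    price *= 0.95
                return price

    The coupling doesn't disappear, but it is confined to the one place that builds the context, rather than smeared across accessor chains.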

    Read the article

  • Auto switching between wired and wireless connections

    - by Joe
    How about this situation: our business deals a lot with medical information, and some of our clients have demands based on HIPAA, etc. There is one now where they do not want an employee to have both wired and wireless on at the same time. If the wireless is on, the wired needs to be turned off automatically, and vice versa. However, this cannot be left up to the end user to manage! I have looked for third-party applications and have only found http://www.wirelessautoswitch.com. Does anyone know of anything else that is out there? Or possibly something that can be done via Group Policy, etc.?
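
    For what it's worth, one DIY fallback is a small watchdog deployed as a scheduled task or service running as SYSTEM, so end users can't manage it. A sketch under assumptions (the interface names are examples, and parsing netsh output this way is locale-dependent):

        # Sketch: disable wireless while the wired NIC is connected,
        # re-enable it otherwise. Run under SYSTEM so users can't interfere.
        import subprocess
        import time

        WIRED = "Local Area Connection"
        WIRELESS = "Wireless Network Connection"

        def wired_connected() -> bool:
            out = subprocess.run(
                ["netsh", "interface", "show", "interface", WIRED],
                capture_output=True, text=True).stdout
            return "Connected" in out  # fragile on non-English Windows

        while True:
            state = "disabled" if wired_connected() else "enabled"
            subprocess.run(["netsh", "interface", "set", "interface",
                            WIRELESS, f"admin={state}"])
            time.sleep(10)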

    Read the article

  • My KDE is very slow in certain operations

    - by Pietro
    I have a problem with my Linux installation. It seems that the KDE code that deals with directory windows is extremely slow (in both Dolphin and Konqueror). This happens both when I click on a directory icon and when I want to open/save a file from many KDE applications. The window can take a minute or more to open. The same happens when I right-click on an icon. Looking at the CPU usage, it is very low (less than 10%). Am I the only one with this problem, or is it well known and maybe already fixed? Consider that I cannot update to a more recent version of openSUSE. Thank you, Pietro
    Configuration:
    • Linux version: openSUSE 11.4
    • KDE 4.6.0
    • System: DELL Precision T3500 - Intel Xeon
    • Home directory mounted on a remote drive <-- could this be the reason?

    Read the article

  • Reboot loop after Windows XP Service Pack 3 update

    - by espais
    Recently I upgraded to Service Pack 3, and now it seems that something has gone terribly wrong with the update. After logging in, my computer will blue screen after about 5 minutes and then go into a reboot loop (I don't have the exact error message handy). I have a Sager NP2092 notebook, running an Intel chipset. I'd rather avoid having to reformat my XP installation, especially with my copy of Windows 7 arriving right around the corner. After doing some Googling, I came across this article: "Does your AMD-based computer boot after installing XP SP3?" However, it deals with AMD chips, and specifically states not to use its fix on Intel-based systems.
    EDIT: After killing the reboot, this is the error that pops up: STOP: 0x000000F4 (0x00000003, 0x8A187118, 0x8A18728C, 0x80604438)
    EDIT 2: I have run Memtest86, and it reported 0 errors.

    Read the article

  • Proper updating of GeoClipMaps

    - by thr
    I have been working on an implementation of GPU-based geoclipmaps, but there is a section of the GPU Gems 2 article that I just can't seem to understand, specifically this paragraph and more precisely the bolded part: "The choice of grid size n = 2^k - 1 has the further advantage that the finer level is never exactly centered with respect to its parent next-coarser level. In other words, it is always offset by 1 grid unit either left or right, as well as either top or bottom (see Figure 2-4), depending on the position of the viewpoint. In fact, it is necessary to allow a finer level to shift while its next-coarser level stays fixed, and therefore the finer level must sometimes be off-center with respect to the next-coarser level. An alternative choice of grid size, such as n = 2^k - 3, would provide the possibility for exact centering." Let's take an example image from the article. My "understanding" of the way the clipmaps were updated was that you floor the position of the viewpoint to an int, and thus get the center vertex point; if this is not the same as the previous center point, you update the entire map. Now, this obviously is not the case. What I am failing to understand is this: if you look at the image above, if the viewpoint were to move one unit to the right, then the inner ring (the one just around the viewpoint + white center square) would end up getting a 1-unit space on both its left and right side. But there is nothing in the paper that deals with this; what I mean is that it would end up looking like this (excuse my crummy cut-and-paste editing of the above image). This is obviously not a valid state of the clipmap. So, would the solution be that a clip ring (layer) can only move in increments of the ring/layer it's contained within? Wouldn't this end up being very restrictive? I feel like I am missing some crucial understanding of parts of the algorithm, but I have been over both this paper and the original paper from 2004 and I just can't see what I am not getting.
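
    Here is a toy sketch of the snapping rule as I read the paper (my own reconstruction, not code from GPU Gems; one axis only): each level's origin snaps to the grid of the next-coarser level, so a ring moves in jumps of two of its own units and sits one unit off-center in between.

        # Toy reconstruction: level `level` has cell size 2**level and its
        # origin snaps to the next-coarser grid (2 * cell), so the finer ring
        # is always offset by exactly one of its own cells, never centered.
        def level_origin(viewpoint_x: float, level: int, n: int) -> int:
            cell = 2 ** level
            half_extent = ((n - 1) // 2) * cell
            snapped_center = (int(viewpoint_x) // (2 * cell)) * (2 * cell)
            return snapped_center - half_extent

        # Moving the viewpoint one fine unit does not move the ring at all;
        # only when the viewpoint crosses a coarse cell boundary does the
        # ring jump by 2*cell. So the "1-unit space on both sides" state
        # never occurs: the ring is pinned until it can jump while staying
        # aligned with its parent.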

    Read the article

  • For my first professional position, should I choose a start-up or a better-known company? [closed]

    - by Carl Carlson
    I am a couple of months removed from graduating with a CS degree, and my GPA wasn't very high, but I do have aspirations of becoming a good software developer. I recently got two job offers: one is with a small start-up and the other is with a military contractor. The military contractor asked for my GPA and I gave it to them. The military contracting position involves developing GIS-related applications, which I became familiar with in an internship. After receiving an offer from the military contractor, I received an offer from the start-up, after the start-up asked me how much the military contractor's offer was. So the pay is even. The start-up would require that I be immediately thrust into it, with only two other people currently in the start-up, and I would have to learn everything on my own. The military contractor has teams and people who know what they're doing and would be able to offer me guidance. Seeing as I am a couple of months removed from school and need something of a refresher, is it better that I dive into the start-up and diversify what I've learned, or that I specialize on a particular track? Some more facts about the start-up: it deals with military contracts as well and is in Phase 2 of its contracts. It will require that I learn a diverse set of technologies, including cyber security, Android development, Python, JavaScript, etc. The military contractor will have me learn more C#, refine my Java, do JavaScript, and work with GIS-related technologies. I might as well come out and say the military contractor is Northrop Grumman, which more or less offered me less money than the projected starting salary from online salary calculators. But there is the possibility of bonuses, while the start-up doesn't offer bonuses. I think benefits for both are roughly the same.

    Read the article

  • How should I structure my database to gain maximum efficiency in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets, and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, the traffic data of all clients is stored in the same table, creating a mess. The same goes for the firewall rules, and for the invoice and support systems. I'm looking for a way to structure my database in a more organized way, to hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database would store user data (email, password, ID, etc.) and admin/website settings. Then each client would have a unique database labeled prefix_userid, with tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but I would like to stay safe and be as efficient as possible.
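
    For what it's worth, the more common pattern is one shared schema with every row scoped by a client ID, rather than a database per client. A minimal sketch (my suggestion, not from the question; table and column names are invented, shown with SQLite for brevity):

        # One schema for all clients: scope rows by client_id and lead the
        # index with it, since every query is per client.
        import sqlite3

        conn = sqlite3.connect("traffic.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS clients (
            id    INTEGER PRIMARY KEY,
            email TEXT NOT NULL UNIQUE
        );
        CREATE TABLE IF NOT EXISTS pageviews (
            id         INTEGER PRIMARY KEY,
            client_id  INTEGER NOT NULL REFERENCES clients(id),
            ip         TEXT,
            referrer   TEXT,
            page       TEXT,
            user_agent TEXT,
            viewed_at  TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP
        );
        -- composite index: per-client queries stay fast as the table grows
        CREATE INDEX IF NOT EXISTS idx_pageviews_client_time
            ON pageviews (client_id, viewed_at);
        """)

    With this layout you can still archive or partition old pageviews by date later, without multiplying databases (and backups, migrations, and connections) by the number of clients.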

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed and YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront. When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching. What are the hard and fast rules for a platform-agnostic Cache-Control strategy?
    EDIT: A link to Jeff Atwood's article explains caching in superb depth. For the record, though, here are the hard and fast rules:
    • If the file is compressed using GZIP, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way, though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    • Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators, whilst ETag is based on file contents instead of modification time alone; using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    • If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    • Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
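
    As an illustration (my own sketch, not from the answer; the one-year max-age is an arbitrary choice), here is how those rules might translate into response headers when serving a static file:

        # Sketch: build cache headers per the rules above for one file.
        import email.utils
        import hashlib
        import os

        def cache_headers(path: str, gzipped: bool) -> list[tuple[str, str]]:
            mtime = os.stat(path).st_mtime
            with open(path, "rb") as f:
                etag = '"%s"' % hashlib.md5(f.read()).hexdigest()  # content-based
            headers = [
                ("Last-Modified", email.utils.formatdate(mtime, usegmt=True)),
                ("ETag", etag),  # belt and braces: both validators
            ]
            if gzipped:
                # keep proxies from serving gzip to clients that can't take it
                headers.append(("Cache-Control", "private, max-age=31536000"))
                headers.append(("Vary", "Accept-Encoding"))
            else:
                headers.append(("Cache-Control", "public, max-age=31536000"))
            return headers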

    Read the article

  • Very large log files - what should I do?

    - by Masroor
    (This question deals with a similar issue, but it talks about a rotated log file.) Today I got a system message about very low /var space. As usual, I executed commands along the lines of sudo apt-get clean, which improved the situation only slightly. Then I deleted the rotated log files, which again provided very little improvement. Upon examination, I found that some log files in /var/log have grown very large. To be specific, ls -lSh /var/log gives:
    total 28G
    -rw-r----- 1 syslog adm   14G Aug 23 21:56 kern.log
    -rw-r----- 1 syslog adm   14G Aug 23 21:56 syslog
    -rw-rw-r-- 1 root   utmp 390K Aug 23 21:47 wtmp
    -rw-r--r-- 1 root   root 287K Aug 23 21:42 dpkg.log
    -rw-rw-r-- 1 root   utmp 287K Aug 23 20:43 lastlog
    As we can see, the first two are the offending ones. I am mildly surprised that such large files have not been rotated. So, what should I do? Simply delete these files and then reboot? Or take some more prudent steps? I am using Ubuntu 14.04.

    Read the article

  • JavaScript 3D space ship rotation

    - by user36202
    I am working with a fairly low-level JavaScript 3D API (not Three.js) which uses Euler angles for rotation. In most cases, Euler angles work quite well for doing things like aligning buildings, operating a hovercraft, or looking around in first person. However, in space there is no up or down. I want to control the ship's roll, pitch, and yaw. To do that, some people would use a local coordinate system or a permanent matrix or quaternion or whatever to represent the ship's orientation. However, since I am not most people and am using a library that deals exclusively in Euler angles, I will be using relative angles to represent how to rotate the ship in space, and getting the resulting non-relative Euler angles. For you math nerds, that means I need to convert 3 Euler angles (with Y being the vertical axis, X representing the pitch, and Z representing a roll which is unaffected by the other angles; left-handed system) into a 3x3 orientation matrix, do something fancy with the matrix, and convert it back into the 3 Euler angles. Euler to matrix to Euler. Somebody has posted something similar to this on SO (http://stackoverflow.com/questions/1217775/rotating-a-spaceship-model-for-a-space-simulator-game), but he ended up just working with a matrix. This will not do for me. Also, I am using JavaScript, not C++. What I want essentially are the functions do_roll, do_pitch, and do_yaw which only take in and put out Euler angles. Many thanks.
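
    The question asks for JavaScript, but the round trip is easiest to show with SciPy's Rotation class, which handles the Euler-to-matrix-to-Euler conversion internally. A sketch under assumptions: a right-handed, intrinsic yaw (Y), pitch (X), roll (Z) convention, so a left-handed engine would need sign adjustments:

        # Sketch: apply a relative (body-frame) rotation and return the new
        # absolute Euler angles. Assumes right-handed intrinsic "YXZ".
        from scipy.spatial.transform import Rotation as R

        def rotate_ship(yaw, pitch, roll, d_yaw, d_pitch, d_roll):
            """All angles in degrees; returns [yaw, pitch, roll]."""
            current = R.from_euler("YXZ", [yaw, pitch, roll], degrees=True)
            delta = R.from_euler("YXZ", [d_yaw, d_pitch, d_roll], degrees=True)
            # right-multiplying applies the delta in the ship's local frame
            return (current * delta).as_euler("YXZ", degrees=True)

        print(rotate_ship(30, 10, 0, 0, 5, 0))  # pitch 5 degrees in ship frame

    Porting this to plain JavaScript means writing out the three axis matrices, multiplying them in the same order, and extracting the angles back with the atan2/asin formulas for the chosen convention.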

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    So, looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad (and would like opinions from others). For example, I have some Django views that do a bit of processing of the objects before sending them to the view, a long method being 350 lines of code. I have my code written so that it deals with the parameters - sorting/filtering the queryset - then bit by bit does some processing on the objects my query has returned. So the processing is mainly conditional aggregation that has complex enough rules that it can't easily be done in the database, so I have some variables declared outside the main loop which get altered during the loop:
    variable_1 = 0
    variable_2 = 0
    for object in queryset:
        if object.condition_a and variable_2 > 0:
            variable_1 += 1
        # ... more conditions that alter the variables
    return queryset, context
    So according to the theory, I should factor out all the code into smaller methods, so that the view method is at most one page long. However, having worked on various code bases in the past, I sometimes find it makes the code less readable when you need to constantly jump from one method to the next, figuring out all the parts of it while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't hidden away in inner methods. I could factor out the code into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or methods that don't do one thing but two or three (alternatively I could repeat inner loops for each task, but then there would be a performance hit). So is there a case that long methods are not always bad? Is there always a case for writing methods when they will only be used in one place?
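
    One middle ground (a sketch with invented names, not the poster's actual code) is to keep the single pass over the queryset but push each rule behind a well-named predicate, so the loop reads like a summary without scattering the logic across many methods:

        # Sketch: one loop, named rules. The loop stays the table of
        # contents; each helper is one rule, testable on its own.
        def aggregate(queryset):
            totals = {"flagged": 0, "processed": 0}
            for obj in queryset:
                if should_flag(obj, totals):
                    totals["flagged"] += 1
                # ...each further rule is one named call, not a nested block
            return totals

        def should_flag(obj, totals):
            # mirrors the original `condition_a and variable_2 > 0` shape
            return obj.condition_a and totals["processed"] > 0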

    Read the article

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files - lawsuits, legislation, letters, etc. - in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files for an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS, instead of storing the files in folders that only allow for flat and fixed storage? For example, I might have a file that deals with: ELID organization, Appeals court, John Smith. It really is inconvenient to have to decide which one of these tags to turn into a folder and place the file into it, when the file falls under all the categories. Even if I could place tags on files the way you can on Stack Exchange questions, it would solve a lot of heartache.
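
    The usual relational shape for this is a many-to-many tagging schema. A minimal sketch (invented names; SQLite, since it needs no server and runs fine on a Windows 7 desktop):

        # Sketch: files, tags, and a junction table so a file can carry any
        # number of tags and a tag any number of files.
        import sqlite3

        conn = sqlite3.connect("files.db")
        conn.executescript("""
        CREATE TABLE IF NOT EXISTS files (
            id   INTEGER PRIMARY KEY,
            path TEXT NOT NULL UNIQUE
        );
        CREATE TABLE IF NOT EXISTS tags (
            id   INTEGER PRIMARY KEY,
            name TEXT NOT NULL UNIQUE
        );
        CREATE TABLE IF NOT EXISTS file_tags (
            file_id INTEGER NOT NULL REFERENCES files(id),
            tag_id  INTEGER NOT NULL REFERENCES tags(id),
            PRIMARY KEY (file_id, tag_id)
        );
        """)

        # "every file tagged both 'Appeals court' and 'John Smith'"
        query = """
        SELECT f.path FROM files f
        JOIN file_tags ft ON ft.file_id = f.id
        JOIN tags t ON t.id = ft.tag_id
        WHERE t.name IN (?, ?)
        GROUP BY f.id HAVING COUNT(DISTINCT t.name) = 2
        """
        print(conn.execute(query, ("Appeals court", "John Smith")).fetchall())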

    Read the article

  • How do I get away from PHP in the web industry?

    - by Kurtis
    I'm just looking for some tips on getting away from using PHP for web development. I'm self-employed, but it seems like all of the work I find deals with PHP. I'm not complaining about the work - just the poor choice of a language that is incredibly popular. I'd love to do my web development in Python, Perl, C#, or even a fun and fancy functional language. There's the old saying that you don't tell a carpenter what kind of hammer to use. At the same time, you do tell them what kind of material to build your house out of and how much you're willing to spend. The problem I'm running into is that I don't know how to get out of this spiral. I can't just turn down work, because then I wouldn't have any. I really don't want to go work for another company - and even if I did, I'd probably still be stuck using something I don't enjoy. I'm hoping someone has "been there" before and might have some good ideas on how to get out of this situation. Thanks!

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example. Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ..., 5 being "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that:
    • A constraint on the values 1 - 5 in the database must be changed,
    • The business layer and code contracts must be changed to add the new state,
    • The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.,
    • The application must implement the new state visually: add new controls for it, new localized strings for tool-tips, etc.
    When an application was written recently by one developer, it's more or less easy to predict every change to be made. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather in a way to predict the parts of an application which will be affected by the change. What those changes are and how difficult they really are doesn't matter in my question.
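
    One way to shrink the blast radius of exactly this kind of change (a sketch of my own, with invented names) is to make sure only one place in the code knows the stored integers, so inserting a state never renumbers the others:

        # Sketch: symbolic states everywhere in code; stored values and
        # workflow order are two separate, single-point-of-change mappings.
        from enum import IntEnum

        class TaskState(IntEnum):
            PENDING = 0
            ACCEPTED = 1
            IN_PROGRESS = 2
            VALIDATED = 6   # new state takes the next free stored value...
            TESTED = 3      # ...so existing rows never need migrating
            DELIVERED = 4
            FINISHED = 5

        # display/workflow order lives here, independent of stored values
        WORKFLOW = [TaskState.PENDING, TaskState.ACCEPTED,
                    TaskState.IN_PROGRESS, TaskState.VALIDATED,
                    TaskState.TESTED, TaskState.DELIVERED, TaskState.FINISHED]

    The database constraint still has to grow to accept the new value, but nothing like "StateReady is now 6 instead of 5" ripples through the layers.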

    Read the article

  • Accessing Secure Web Services from ADF Mobile

    - by Shay Shmeltzer
    Most of the enterprise Web services you'll access are going to be secured - meaning they'll require you to pass a user/password in order to get to their data. If you've never created a secured Web service, it's simple in JDeveloper! For the video below, I just right-clicked on a Java class that I exposed as a Web service, chose "Web Service Properties", and then checked the "oracle/wss_username_token_service_policy" box from the list of options (that's the option supported by ADF Mobile right now). In the demo below we are going to use a "remote" login server that does the authentication of the user/pass. The easiest way to "create" a remote login server is to create a "regular" web ADF application, secure it, and deploy it on a server. The secured ADF application can just require ADF Authentication with simple HTTP Basic Authentication - basically the next two images in the Application->Secure->Configure ADF Security menu wizard. OK, so now you have a secured ADF application: deploy it on a server and get the URL for that application. From this point on you'll see the process in the video, which deals with the configuration of your ADF Mobile app. First you'll need to enable security for your ADF Mobile application, so it will prompt users to provide a user/pass combination. You'll also need to configure security on specific features, and you can have them use remote login pointing to your regular secured ADF application. Next, define your Web service data control. Right-click on the Web service data control to "define Web Service Security". You'll also need to define the adfCredentialStoreKey property for the Web Service data control in the connections.xml file. This should be it. Here is the flow: If you haven't already, you can read more about this in the Mobile developer guide, and Andrejus has a sample for you.

    Read the article

  • How does Datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes Datomic as a system which implicitly deals with timestamps associated with data storage. From my experience, data is often imperfectly stored in systems, and on many occasions needs to be corrected retroactively (i.e., often the question "was A true on Tuesday at 12:00pm?" will have an incorrect answer stored in the database). This seems like a spot where the abstractions behind Datomic might break - do they? If they don't, how does the system handle such corrections? Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some of the related context into their work (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc...). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; from my experience, they are a non-trivial (and incredibly difficult to deal with) subset of time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?

    Read the article

  • Windows 7 file sharing: password protecting or making stuff available to just me

    - by Carbonara
    Even with the new Homegroup feature, I'm still finding the way Windows deals with folder sharing utterly baffling. Here's what I want to do: I have two computers, a desktop PC and a laptop. I also live in a shared flat with other computer users. I have set up a Homegroup and a Workgroup on the desktop and joined them on the laptop, and in the Homegroup I have shared video, music, and pictures. This is so that anyone on the network can view pictures and listen to music, etc. But I want the Documents folder from my desktop to be available only to me on my laptop, and not to anyone else that may be on the network. The Homegroup only allows (from what I can gather from the baffling array of options) sharing with everyone or no one. Is it possible to allow only the laptop to access the documents folder on the desktop? The user name and password are the same on both computers.

    Read the article

  • What are the most necessary non-language-specific things a programmer needs to know?

    - by Josh
    I've been thinking about this a lot lately. Right now I work at a little web company, am almost done with school, and have written an iPhone app, but I'm not sure what else I need to focus my learning energies on. I've decided I want to do software programming, so I've been actively reading everything I can get my hands on that deals with Objective-C/C++ (Cocoa, OpenGL, etc). But those are not the things I'm talking about. I know I need to "master" a language or two. What I'm talking about are the other "things", such as learning and using source control, design patterns, etc. What thing (or things - just one per response) would you say I should concurrently be focusing on? You can consider in your answer that I'm wanting to follow the aforementioned career path, but you don't have to. I just want a nice list of things to research and actually use in my career. Also, can someone help me with tagging this? I'm not sure what exactly would be a good tag for such a question.

    Read the article

  • Disabling monitor reconfiguration when closing the lid

    - by Tomas
    I often need to move my laptop from one working place to another. When I do this, there are two events Ubuntu responds to by changing the monitor setup:
    • Removing/attaching the VGA cable
    • Closing/opening the lid of the laptop
    While removing the VGA cable gives me what I need (single screen; highest native resolution on the external screen if connected, otherwise highest resolution on the laptop), the response to closing/opening the lid is not as good. Every time I close or open the lid, Ubuntu reconfigures the monitor setup. When I close the lid now, the screen goes black for a few seconds and it switches to clone mode, with my laptop screen disabled. Reopening results in a brief black screen, then the external monitor being used as a desktop extension. Ubuntu thinks too much. My first and foremost question: is there any way to make Ubuntu ignore lid close events? Ideally (or when there's no way to solve the above question), I'd want to change how it deals with the screen reconfiguration. Why does Ubuntu toggle the screen configuration between external, clone, and single display? Can't I just configure it to always use the external monitor, when present, in single-screen mode? Note that similar questions have been asked before (most notably this one), but these have been closed, perhaps wrongly. Any ideas are very welcome; I don't mind playing around a bit to see if something works.

    Read the article

  • How do you keep up with Nagios/Capistrano configs when using EC2?

    - by imaginative
    I use Amazon EC2 for my mobile app. Depending on the load of the application at a given time, I might spawn new instances and then take them down when load is lower, to save costs. How does one keep up with Nagios configurations in such a dynamic environment? When one deals with managed hardware, configuration files are predictable. In this case, Nagios, Capistrano, and a bunch of other configuration files would need to be updated. Capistrano needs to know where to deploy a new build for an app server. Nagios needs to know to remove an existing instance or add a new instance for monitoring. Nagios also needs to know if a node was intentionally taken down or if the host is down due to an error. How is this done in the wonderful world of VPS/dynamic instances?
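
    A common pattern (my sketch, not from the question; boto3 and the file path are assumptions) is to treat the cloud API as the source of truth and regenerate the Nagios config from the live instance list on a schedule. Instances that were intentionally terminated simply vanish from the next generated file, so they stop alerting:

        # Sketch: rebuild Nagios host definitions from running EC2 instances.
        import boto3

        def write_nagios_hosts(path="/etc/nagios/conf.d/ec2_hosts.cfg"):
            ec2 = boto3.client("ec2")
            result = ec2.describe_instances(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
            )
            blocks = []
            for reservation in result["Reservations"]:
                for inst in reservation["Instances"]:
                    blocks.append(
                        "define host {\n"
                        "    use        generic-host\n"
                        f"    host_name  {inst['InstanceId']}\n"
                        f"    address    {inst.get('PrivateIpAddress', '')}\n"
                        "}\n"
                    )
            with open(path, "w") as f:
                f.write("\n".join(blocks))
            # then reload Nagios so it picks up the new host list

        write_nagios_hosts()

    Capistrano can be fed the same way: query the API for the current app-server addresses at deploy time instead of hard-coding them. (Today this role is usually filled by configuration management or service discovery, but the regenerate-from-API loop is the core idea.)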

    Read the article
