Search Results

Search found 5178 results on 208 pages for 'lost my wallet in el segundo'.

Page 144/208 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • Some Problems Can't Be Outsourced

    - by mikef
    More and more companies are becoming attracted to the idea of Infrastructure as a Service (or IaaS). It would seem that you can outsource the provisioning and management of your services, encompassing everything from email, through to your servers, workstations and software, all the way down to your LAN and internet services. This type of outsourcing can be a very attractive option for companies with tight budgets that are short of technical skills or don't have the means to provide long-term IT support. Essentially, they can outsource their services at low short-term costs that are knowable and controllable, are quickly and easily scalable, and generate a minimum of hassle for internal staff. If you want to get a sophisticated IT infrastructure set up in a hurry, without the usual high buy-in costs or the task of finding and hiring the right specialists, it would seem the way to go, particularly when the salesmen are hypnotizing you with oleaginous phrases such as "we are closely aligned with our client organization's core business requirements, providing agile services".

    It sounds too good to be true, and so it is. The costs will initially have been calculated from the annual renewal fees and service fees for ongoing support, but there are other charges too which aren't so obvious. It can end up costing far more than the conventional solution once you take into account the extra costs: the fees for customization and upgrades. The Total Cost of Ownership (TCO) only becomes apparent when it is too late to extract the company easily from the arrangement. After a few years, these annual fees can add up to more than the initial cost of implementing a traditional in-house system. Worse still, you can then lose the power to determine your own priorities: when you become reliant on this company, with its own schedule of priorities, to implement every change, however simple, you have effectively lost control of your technical infrastructure. This will make senior management very nervous.

    There is definitely a requirement for this sort of service. If you urgently need an exceptionally high class of service or more expertise than you currently possess, then outsourcing is probably for you. You and your IT colleagues will always have something to do, be it user assistance, smoothing out integrations with an external provider, or working on something entirely new. Heck, if you outsource to IBM, the SysAdmins can go along for the ride and polish their expertise. What you need to figure out is how much your time is worth, because time is ultimately all that outsourcing will buy you and your organization. Now you just need to convince your nervous CEO. Cheers, Michael

    Read the article

  • Can a 10-bit monitor connection preserve all tones in 8-bit sRGB gradients on a wide-gamut monitor?

    - by hjb981
    This question is about color management and the use of a higher color depth, 10 bits per channel (30 bits in total, resulting in 1.07 billion colors, or 1024 shades of gray, sometimes referred to as "deep color"), compared to the standard of 8 bits per channel (24 bits in total, 16.7 million colors, 256 shades of gray, sometimes referred to as "true color"). Do not confuse this with "32-bit color", which usually refers to standard 8-bit color with an extra channel ("alpha channel") for transparency (used to achieve effects like semi-transparent windows, etc.). The following can be assumed to be in place:

    1. A wide-gamut monitor that supports 10-bit input. Further, it can be assumed that the monitor has been calibrated to its native gamut and that an ICC color profile has been created.
    2. A graphics card that supports 10-bit output (and is connected to the monitor via DisplayPort).
    3. Drivers for the graphics card that support 10-bit output.

    If applications that support 10-bit output and color profiles were used, I would expect them to display images that were saved using different color spaces correctly. For example, both an sRGB and an AdobeRGB image should be displayed correctly. If an sRGB image was saved using 8 bits per channel (almost always the case), then the 10-bit signal path would ensure that no tonal gradients were lost in the conversion from the sRGB of the image to the native color space of the monitor.

    For example: if the image contains a pixel that is pure red in 8 bits (255,0,0), the corresponding value in 10 bits would be (1023,0,0). However, since the monitor has a larger color space than sRGB, sending the signal (1023,0,0) to the monitor would result in a red that was too saturated. Therefore, according to the ICC color profile, the signal would be transformed into a different value with less red saturation, for example (987,0,0). Since there are still plenty of levels left between 0 and 987, all 256 values (0-255) for red in the sRGB color space of the file could be uniquely mapped to color-corrected 10-bit values in the monitor's native color space. However, if the conversion were done in 8 bits, (255,0,0) would be translated to (246,0,0), and there would now only be 247 available levels for the red channel instead of 256, degrading the displayed image quality.

    My question is: how does this work on Ubuntu? Let's say that I use Firefox (which is color-aware and uses ICC color profiles). Would I get 10-bit processing, thus preserving all levels of an 8-bit picture? What is the situation like for other applications, especially photo applications like Shotwell, RawTherapee, Darktable, RawStudio, Photivo, etc.? Does Ubuntu differ from other operating systems (Linux and others) on this point?
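
    The arithmetic above can be sanity-checked with a short sketch (Python; the single linear correction factor is an illustrative stand-in for a real ICC transform, chosen to match the 255 -> 246 red example, not a claim about any actual profile):

        # Illustrative only: one linear gamut-correction factor in place of the ICC transform.
        factor = 246 / 255

        # 8-bit pipeline: correct the 256 sRGB levels in 8 bits, then display.
        out_8bit = {round(v * factor) for v in range(256)}

        # 10-bit pipeline: expand to 10 bits first, then apply the same correction.
        out_10bit = {round(v * 1023 / 255 * factor) for v in range(256)}

        print(len(out_8bit))   # 247 -> neighbouring sRGB tones collide (banding)
        print(len(out_10bit))  # 256 -> every sRGB tone keeps a distinct output level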

    Read the article

  • Avoiding the Black Hole of Leads

    - by Charles Knapp
    Sales says, "Marketing doesn’t deliver enough qualified leads. So, we generate 90% of our own leads." Meanwhile, Marketing says, "We generate most of the leads. But, Sales doesn’t contact them quickly enough, while the lead is still interested." According to Sirius Decisions: Up to 90% of leads never make it to closure Sales works on only 11% of the leads supplied by Marketing Only 18% of the leads Sales accepts convert to opportunities Yet, 45% of prospects typically buy a product from someone within 12 months The root cause of these commonplace complaints is a disconnect between the funnels of marketing and sales. Unfortunately, we often see companies with an assortment of poorly integrated marketing tools. It takes too long and too many people to move the data around, scrub it, upload it from one system to another, and get it routed to the right sales teams. As a result, leads fall through the cracks, contextual information is lost, and by the time sales actually contacts a customer it may be too late. Sales automation alone is not enough. Marketing automation (including social) is not enough. Sales and Marketing must work together. It’s time to connect the silos of marketing and sales pipelines and analytics. It’s time for integrated Sales and Marketing automation. Integrated pipelines improve lead quality and timeliness. Marketing systems can track a rich set of contextual information about a prospect–self-disclosed information about interests, content viewed, and so on. This insight can equip the sales rep with rich information to make a face-to-face conversation more relevant and more likely to convert to the next stage in the sales process. Integrated lead to revenue (LTR) management provides end-to-end visibility, enabling the company to measure what is working. Marketing can measure its impact on revenue and other business outcomes, and sales can harness and redirect marketing investments to areas where they most help achieve sales objectives. It’s a win-win play. Marketing delivers more leads that are qualified, cuts cost per lead, and demonstrates a strong Return on Marketing Investment (ROMI). Sales spends more time with warm leads and less time on cold calls, achieves higher close rates, and delivers more revenue. Learn more by attending our Integrated Sales and Marketing session at the upcoming CloudWorld conferences. Or, visit our Sales and Marketing Cloud Service site for videos and other learning resources.

    Read the article

  • Got Samba, Got PyNeighbourhood but still no connection. What else do I need?

    - by Frank A
    I am sure I had already hit post on this before, but then I could only find it by backing through the browser. Was it deleted? Is the question too dumb? Sorry that I do not know the right jargon; I am just trying to get answers to my problem. Anyway, I have reworded stuff a bit.

    This seems to be a number-one requirement for lots of people, and 2 months on from setting up my Ubuntu PC, I am still unable to get a lasting connection in either direction. Adding a Windows PC to a network is so easy... just a few clicks and you get on with using it all. Using all-command approaches and modifying configuration files is hardly user-friendly. Googling brings up thousands of solutions, but mostly they are too techy or assume the user is fully aware of how to use Linux. I do realise that there must be a lot of flavours for connecting to networks.

    So far I have installed Samba and fiddled with its config file. The day I did all that, it worked from XP to Ubuntu. When I came back two days later to transfer my data over, it would not connect, although the share does show up in Windows (XP) My Network Places. Today I installed PyNeighbourhood, and this shows the Ubuntu box and all of the shares I had created at some point on Ubuntu, and it even shows this under the XP workgroup name. But instructions on setting the connection up seem to relate to an earlier version, and nothing seems to work there either. (I unshared most of those test folders, but they still show up here; that is another question.)

    When I click on mount (I can only click on one on the Ubuntu machine; there is one with no name, so I assume this to be my attempt to add one XP shared drive using its IP address), I get errors:

        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        (gksu:9767): Gtk-WARNING **: Unable to locate theme engine in module_path: "pixmap",
        mount error(6): No such device or address
        Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

    OK, I tried to find the manual referred to... only an old comment that a manual would be produced for future versions. I saw in another thread that Winbind is needed as well, or at least I assume as well? Totally lost again. Please help: what else needs to be installed to connect to Windows PCs on the network?

    Read the article

  • My own personal use of Oracle Linux

    - by wcoekaer
    It always is easier to explain something with examples... Many people still don't seem to understand some of the convenient things about using Oracle Linux, and since I personally (surprise!) use it at home, let me give you an idea. I have quite a few servers at home, and I also have 2 hosted servers with a hosting provider. The servers at home I use mostly to play with random Linux-related things, or with Oracle VM, or just to try out various new Oracle products to learn more. I like the technology; it's like a hobby, really.

    To be able to have a good installation experience, use an officially certified Linux distribution, and not waste time trying to find the right libraries, I, of course, use Oracle Linux. Now, at least, I can get a copy of Oracle Linux for free (even if I was not working for Oracle), and I can/could use that on as many servers at home (or at my company, if I worked elsewhere) for testing, development and production. I just go to http://edelivery.oracle.com/linux and download the version(s) I want, and off I go. I also have the right (and not because I am an employee) to take those images, put them on my own server, and give them to someone else. In fact, I just recently set up my own mirror on my own hosted server. I don't have to remove oracle-logos, I don't have to rebuild the ISO images, I don't have to recompile anything; I can just put the whole binary distribution on my own server without a contract. Perfectly free to do so.

    Of course the source code of all of this is there. I have a copy of the UEK code at home, just cloned from https://oss.oracle.com/git/?p=linux-2.6-unbreakable.git. And as you can see, there is the entire changelog, checkins, merges from Linus's tree: a complete overview of everything that got changed from kernel to kernel, from patch to patch, errata to errata. No obfuscation, no tarballs, no spending time with diff or reading bug reports to find out what changed (seems silly to me).

    Some of my servers are on the external network and I need to be current with security errata, but guess what, no problem: my servers are hooked up to http://public-yum.oracle.com, which is open, free, and completely up to date, in a consistent, reliable way, with any errata, security or bugfix. So I have nothing to worry about. Also, not because I am an employee. Anyone can. And with this I also can, and have, set up my own mirror site that hosts these RPMs, both binary and source, because I am free to get them and distribute them. I am quite capable of supporting my servers on my own, so I don't need to rely on the support organization or have a support subscription :-). So I don't need to pay. Neither would you, at least not with Oracle Linux.

    Another cool thing. The hosted servers came (unfortunately) with CentOS installed. While CentOS works just fine as is, I prefer to be reliably current with my security errata and to maintain one yum repository instead of two, so I converted them over to Oracle Linux as well (in place), and they happily receive and use the exact same RPMs. Since Oracle Linux is exactly the same as RHEL from a user/application point of view, including files like /etc/redhat-release, with no changes from .el. to .centos., I know I have nothing to worry about when installing RHEL applications. So, OL everywhere makes my life a lot easier, and why not... Next!

    Since I run Oracle VM and I have tons of VMs on my machines (in some cases, on my big WOPR box, I have 15-20 VMs running), well, no problem: OL is free and I don't have to worry about counting the number of VMs, whether it's 1, or 4, or more than 10... like some other alternatives started doing... And finally :) I like to try out new stuff, not 3-year-old stuff. So with UEK2 as part of OL6 (and 6.3 in particular) I can play with a 3.0.x-based kernel, and it just installs and runs perfectly clean with OL6: quite current stuff, in an environment that I know works, with no need to toy around with an unsupported pre-alpha upstream distribution with libraries and versions that are not compatible with production software (I have nothing against Ubuntu or Fedora or openSUSE; they're just not what I can rely on or use for what I need, and I don't need a desktop). Pretty compelling, I say... And again, it doesn't matter that I work for Oracle: if I was working elsewhere, or not at all, all of the above would still apply. Student, teacher, developer, whatever. Contrast this with $349 per year for two sockets, one guest, and self-support, just to get the software bits.

    Read the article

  • The partition table is corrupt

    - by Tim
    I have a corrupt partition table on a laptop that is running Ubuntu 10.04. Before the partition table was corrupted, I had the following partitions:

    2 primary partitions:
      1st - NTFS
      2nd - Extended

    4 logical partitions built within the 2nd (extended) partition:
      1st - NTFS (68 GiB)
      2nd - Linux (19 GiB)
      3rd - Swap (1.4 GiB)
      4th - Linux (24 GiB)

    The physical order of these partitions was: (4th Linux) - (1st NTFS) - (2nd Linux) - (3rd Swap). The logical order of the partitions was different: (1st NTFS) - (2nd Linux) - (3rd Swap) - (4th Linux).

    The NTFS partition was big and resided between the two Linux partitions, neither of which had enough space to install Oracle 11g. Therefore, I decided to either a) move the NTFS partition to the left, or b) remove it completely and extend the partition where Linux resides. As a tool I chose GParted. Unfortunately, it was not able to move the partition, because it found that some blocks in the NTFS partition are referenced multiple times. It was not able to remove the partition either, because in that case the partitions that follow it, (2nd Linux) - (3rd Swap), would in its opinion also have to be removed, because an extended partition is organized as a linked list.

    Since GParted was not able to do such a thing, I tried to find another tool. I found the diskdrake tool in the PCLinuxOS distribution. That tool silently deleted the (1st NTFS) partition, and I thought that everything was fine. But diskdrake damaged the partition table in such a way that I am neither able to boot from the hard disk nor to see the partitions with GParted, or even with diskdrake itself! Fortunately I have a live CD of Ubuntu 8.10, and I am able to boot and see the hard disk. I have 2 ideas for how to solve the problem:

    1. Manually change the disk partition entries and point them at the correct partitions.
    2. Create a partition table with GParted that is as close as possible to the previous one.

    I find the 2nd approach less time-consuming, but some data will be lost, because it is not possible to place the borders of the partitions exactly where they were before. Moreover, I am not sure if such an approach would work; for example, whether the OS would be able to locate files after repartitioning. I feel like it will, but I am not 100% sure. Are there any ideas on how the problem may be solved?
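
    For background on why GParted balks: the MBR's primary table has only four slots, and logical partitions are chained, with each extended boot record pointing to the next, which is the "linked list" mentioned above. Here is a minimal sketch of reading the four primary slots from the first sector (Python; it assumes a raw disk device or image file you have permission to read):

        import struct

        def primary_partitions(device_or_image):
            """Parse the four primary entries from the first 512-byte sector (the MBR)."""
            with open(device_or_image, "rb") as disk:
                sector = disk.read(512)
            assert sector[510:512] == b"\x55\xaa", "no MBR boot signature found"
            table = []
            for i in range(4):
                entry = sector[446 + 16 * i : 446 + 16 * (i + 1)]
                part_type = entry[4]  # e.g. 0x07 = NTFS, 0x05/0x0f = extended
                lba_start, num_sectors = struct.unpack_from("<II", entry, 8)
                if part_type != 0:
                    table.append((i + 1, hex(part_type), lba_start, num_sectors))
            return table

        # e.g. primary_partitions("/dev/sda") run as root, or against a dd image of the disk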

    Read the article

  • Link instead of Attaching

    - by Daniel Moth
    With email storage not being an issue in many companies (I think I currently have 25GB of storage on my email account; I don’t even think about storage), this encourages bad behaviors such as liberally attaching office documents to emails instead of sharing a link to the document in SharePoint or SkyDrive or some file share, etc. Attaching a file admittedly has its usage scenarios too, but it should not be the default. I thought I'd list the reasons why sharing a link can be better than attaching files directly. In no particular order:

    - Better Review. It allows multiple recipients to review the file, and their comments are aggregated into a single document. The alternative is everyone having to detach the document, add their comments, then send it back to you, and then you have to collate. With the alternative, you also potentially miss out on recipients reading comments from other recipients.
    - Always up to date. The attachment becomes a fork instead of an always up-to-date document. For example, you send the email on Thursday, I only open it on Tuesday: between those days you could have made updates that I am now missing because you decided to share an attachment instead of a link.
    - Better bookmarking. When I need to find that document you shared, you are forcing me to search through my email (I may not even be running Outlook), instead of opening the link which I have bookmarked in my browser, or in my collection of links in my OneNote, or from the recent/pinned links of the Office app on my task bar, etc.
    - Can control access. If someone accidentally or naively forwards your link to someone outside your group/org who you’d prefer not to have access to it, the location of the document can be protected with specific access control.
    - Can add more recipients. If someone adds people to the email thread in Outlook, your attachment doesn't get re-attached; instead, the person added is left without the attachment unless someone remembers to re-attach it. If it was a link, they are immediately caught up without further actions.
    - Enable Discovery. If you put it on a share, I may be able to discover other cool stuff that lives alongside that document.
    - Save on storage. This doesn't apply to me given my opening statement, but if in your company you do have such limitations, attaching files eats up storage on all recipients' accounts and the attachments will also get "lost" when those people archive email (and lost completely at some point if they follow the company retention policy).

    Like I said, attachments do have their place, but they should be an explicit choice for explicit reasons rather than the default. Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Legitimate use of the Windows "Documents" folder in programs.

    - by romkyns
    Anyone who likes their Documents folder to contain only things they place there knows that the standard Documents folder is completely unsuitable for this task. Every program seems to want to put its settings, data, or something equally irrelevant into the Documents folder, despite the fact that there are folders specifically for this job [1].

    So that this doesn't sound empty, take my personal "Documents" folder as an example. I don't ever use it, in that I never, under any circumstances, save anything into this folder myself. And yet, it contains 46 folders and 3 files at the top level, for a total of 800 files in 500 folders. That's 190 MB of "documents" I didn't create. Obviously any actual documents would immediately get lost in this mess.

    My question is: can anything be done to improve the situation sufficiently to make "Documents" useful again, say over the next 5 years? Can programmers be somehow educated en masse not to use it as a dumping ground? Could the OS start reporting some "fake" location hidden under AppData through the existing APIs, while only allowing Explorer and the various Open/Save dialogs to know where the "real" Documents folder resides? Or are any attempts completely futile or even unnecessary?

    [1] For the record, here's a quick summary of the various standard directories that should be used instead of "Documents":

    - RoamingAppData for user-specific data and settings. This is the directory to use for user-specific non-temporary data. Anything placed here will be available on any machine that a given user logs on to in networks where this is configured. Do not place large files here, though, because they slow down login/logout in such environments.
    - LocalAppData for user-and-machine-specific data and settings. This data differs for every user and every machine. This is also where very large user-specific data should be placed.
    - ProgramData for machine-specific data and settings. These are the same regardless of which user is logged on, and will not roam to other machines in a network.
    - GetTempPath for all files that may be wiped without loss of data when not in use. This is also the place for things like caches, because, like temporary data, a cache does not need to be backed up. Place your huge cache here and you'll save your user some backup trouble.

    "Documents" itself should only ever be used if the user specified it manually by entering a path or selecting it in a Save dialog. That is the only time it is ever appropriate to save stuff in "Documents".
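
    As a rough sketch of where each class of data lands (Python, reading the standard Windows environment variables; a real application would resolve these through SHGetKnownFolderPath or its framework's equivalent):

        import os
        import tempfile

        # Each entry pairs a purpose with the directory that should hold it.
        locations = {
            "RoamingAppData (user data that roams)":    os.environ.get("APPDATA"),
            "LocalAppData (user+machine, large data)":  os.environ.get("LOCALAPPDATA"),
            "ProgramData (machine-wide data)":          os.environ.get("PROGRAMDATA"),
            "Temp (disposable files and caches)":       tempfile.gettempdir(),
        }
        for purpose, path in locations.items():
            print(f"{purpose}: {path}")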

    Read the article

  • Pixels - A cry for some insight

    - by CarrotFile
    I'm pretty new to web development and I'd love some clarification. Although I have read more than one book on the topic, I cannot seem to wrap my head around the pixel concept. I encounter problems with this issue when trying to use CSS and pixel units for designs that fit different screen sizes.

    To my understanding, a pixel is the most basic unit used by a monitor in order to compose an image on the screen. So if my resolution is 800 by 600, everything on my screen is rendered using those 800*600 basic building blocks. If I were to enlarge my screen resolution, 3 things would occur:

    A. The basic image building block (the pixel) would shrink in size
    B. The pixels would move closer together
    C. Well, more pixels would now be available

    All these combined lead to a sharper (depending on the viewing distance) and more detailed image. Well, so far so good. Here is where I start getting lost: to my knowledge, a pixel is not a physical, real object. Monitors are not embedded with a few thousand pixels. I am drawn to this conclusion because anyone can change his screen's resolution, making a pixel on his screen bigger or smaller, and adding or subtracting the amount of total pixels on screen. Adding to that, I have heard that different monitors have different pixel densities, for example Apple's Retina monitors.

    Taking all of the above as my knowledge base, these are my questions:

    1. If a pixel has no real-world constant size, why does comparing different pixel densities matter? Each screen company could define its own pixel concept and declare the higher density.
    2. What does a bigger pixel density mean? Say we take two screens with the same physical dimensions but different pixel densities: am I to assert that the main difference would be the larger-density screen being able to display a higher maximum resolution? Or am I to assert that, given the same resolution on both monitors, the higher-density one would display a sharper, smaller image?
    3. If a pixel is not a fixed size within one monitor, is it a fixed size between the same resolution on two different monitors? For example, would two different monitors, set to the same resolution, be composed of pixels of the same size and quantity?

    I'd love some help (:
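
    For reference, pixel density is usually quoted in pixels per inch (PPI), computed from the panel's native resolution and its physical diagonal. A quick sketch (Python; the example sizes are illustrative):

        import math

        def ppi(width_px, height_px, diagonal_inches):
            """Pixel density: diagonal length in pixels divided by the physical diagonal."""
            return math.hypot(width_px, height_px) / diagonal_inches

        print(f"{ppi(1920, 1080, 21.5):.1f}")  # a typical desktop monitor: ~102 PPI
        print(f"{ppi(2880, 1800, 15.4):.1f}")  # a 'Retina'-class laptop panel: ~220 PPI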

    Read the article

  • Powerful Lessons in Data from the Presidential Election

    - by Christina McKeon
    Now that we’ve had a few days to recover from the U.S. presidential election, it’s a good time to take a step back from politics and look for the customer experience lessons that we can take away. The most powerful lesson is that when you know more about your base, you will have an advantage over your competition. That advantage will translate into you winning and your competition losing. Michael Scherer of TIME was given access to Obama’s data analysts two days before the election. His account is documented in Inside the Secret World of the Data Crunchers Who Helped Obama Win. What we learned from Scherer’s inside view is how well Obama’s team did in getting the right data, analyzing it, and acting on it. This data team recognized how critical it was to break down data silos within the campaign. As Scherer noted, they created “a single system that merged information from pollsters, fundraisers, field workers, consumer databases, and social-media and mobile contacts with the main Democratic voter files in the swing states.” The Obama analysis was so meticulous that they knew which celebrity and which type of celebrity event would help them maximize campaign contributions. With a single system, their data models became more precise. They determined which messages were more successful with specific demographic groups and that who made the calls mattered. Data analysis also led to many other changes in Obama’s campaign including a new ad buying strategy, using social media and applications to tap into supporters’ friends, and using new social news sites. While we did not have that same inside view into Romney’s campaign, much of the post-mortem coverage indicates that Romney’s team did not have the right analysis. As Peter Hamby of CNN wrote in Analysis: Why Romney Lost, “Romney officials had modeled an electorate that looked something like a mix of 2004 and 2008….” That historical data did not account for the changing demographics in the U.S. Does your organization approach data like the Obama or Romney team? Do you really know your base? How well can you predict what is going to happen in your business? If you haven’t already put together a strategy and plan to know more, this week’s civics lesson is a powerful reason to do it sooner rather than later. Your competitors are probably thinking the same thing that you are!

    Read the article

  • Homepage issue on Google [closed]

    - by nico
    We have recently updated our website www.blinds4uk.co.uk with a new homepage containing additional features and more on-page content, but since then we have lost primary keyword positions and the homepage has disappeared completely. The only time it appears is for an exact search for 'blinds4uk'. Today I took snippets of unique content from the homepage and put them into Google search, but our homepage was nowhere to be found. When I did the same in Yahoo, the homepage came up. Are we missing something?

    We ranked in the top 4 for the primary keyword terms 'blinds uk' & 'uk blinds', but now we don't show anywhere for these terms. Our homepage has never ranked well for the primary keyword 'blinds', yet our internal pages rank very well, with many pages on page 1 of Google UK. We employed an SEO firm for 9 months to help us establish the issues with the homepage, but they never could, so we let them go. We have been trying to get to the root cause of why the homepage ranks so poorly for a number of years, and only yesterday we established that we had the meta tag directly below the tag, with our title & meta description further down the page; we have today corrected this. I am not sure what effect this would have on the way Google reads the homepage, but we are trying everything to get the homepage ranking for those primary keywords.

    Our current developers & ex-SEO guys are all part of the same company and cannot pinpoint anything, other than saying to carry on with their SEO team because it will take time; it just comes across as a milking exercise. Another thing which I have found very strange is the data from our 'traffic audience'. We are a UK-based website, yet our traffic stats were showing: UK 36.6%, Denmark 35.8% and India 27.6%. That doesn't make sense to me!

    Is there anybody out there who could simply point us in the right direction to the problem(s), so we can fix them once and for all? Could there be anything within the code that is causing the homepage not to display within Google for our primary keyword terms, such as blinds, window blinds, etc.? I would appreciate any advice at all that may help us in our quest to sort out this homepage issue once and for all.

    Read the article

  • SMTP POP3 & PST. Acronyms from Hades.

    - by mikef
    A busy SysAdmin will occasionally have reason to curse SMTP. It is, certainly, one of the strangest events in the history of IT that such a deeply flawed system, designed originally purely for campus use, should have reached its current dominant position. The explanation is that it was the first open-standard email system, so SMTP/POP3 became the internet standard. We are, in consequence, dogged by a system with security weaknesses so extreme that messages are sent in plain text and you have no real assurance as to who the message came from anyway (SMTP-AUTH hasn't really caught on). Even without the security issues, the use of SMTP in an office environment presents a management nightmare to all commercial users responsible for complying with the regulations that control the conduct of business, such as tracking, retaining, and recording company documents.

    SMTP mail developed from various Unix-based systems designed for campus use that took the mail analogy so literally that mail messages were actually delivered to the users, using a 'store and forward' mechanism. This meant that, from the start, the end user had to store, manage and delete messages. This is a problem that has passed through all the releases of MS Outlook: it has to be able to manage mail locally in the dreaded PST file. As a stand-alone system, Outlook is flawed by its neglect of any means of automatic backup. Earlier Outlook PST files actually blew up without warning when they reached the 2 GB limit and became corrupted and inaccessible, leading to a thriving industry of 3rd-party tools to clear up the mess.

    Microsoft Exchange is, of course, a server-based system. Emails are less likely to be lost in such a system if it is properly run. However, there is nothing to stop users from using local PSTs as well. There is the additional temptation to load emails onto mobile devices or USB keys for off-line working. The result is that the System Administrator is faced with a complex hybrid system where backups have to be taken from servers and PCs scattered around the network, where duplication of emails causes storage issues, and where document retention policies become impossible to manage. If one adds to that the complexity of mobile phone email readers and mail synchronization, the problem is daunting. It is hardly surprising that the mood darkens when SysAdmins meet and discuss PST Hell.

    If you were promoted to the task of tormenting the souls of the damned in Hades, what aspects of the management of Outlook would you find most useful for your task? I'd love to hear from you. Cheers, Michael

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX), but there's this block of readable text which says:

        This text is part of the internal format of your mail folder, and is not
        a real message. It is created automatically by the mail system software.
        If deleted, important folder data will be lost, and it will be re-created
        with the data reset to initial values.

    Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched between non-printable characters. I've extracted some below:

        Scan not valid for mh mailboxes
        Bogus character 0x%x in news state
        Can't rewrite news state %.80s
        Error closing backup news state %.80s
        No state for newsgroup %.80s found

    Now, a few concerns: Am I under attack? The messages seem to be about my webmail, but I don't use my personal webmail that much: only for a vanity email address and an inbox for an outdated comments system. However, lately, I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha, but every now and then some get through. My vanity email has a spam filter, but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only got 150MB, so you see why I'm fretting over a 50MB spike.

    Some final details: my only server-side scripts are in PHP. The directory which accumulated the most of these core files is the one containing the WordPress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files, and after some checking, nothing seems amiss on my websites or in my mail. They were indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.
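
    Incidentally, files matching core.<pid> are typically core dumps left behind by a crashing process. A quick sketch for tallying them (Python; it assumes the same public_html layout described above):

        import re
        from pathlib import Path

        # Find core.<pid> files anywhere under public_html and sum their sizes.
        cores = [p for p in Path("public_html").rglob("core.*")
                 if re.fullmatch(r"core\.\d+", p.name)]
        total_mib = sum(p.stat().st_size for p in cores) / (1024 * 1024)
        print(f"{len(cores)} core files, {total_mib:.1f} MiB total")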

    Read the article

  • Adding JavaScript to your code dependent upon conditions

    - by DavidMadden
    You might be in an environment where your code is source-controlled and where you might have build options for different environments. I recently encountered this where the same code, built with different configurations, would have the website at a different URL. If you are working with ASP.NET, as I am, you will have to do something a bit crazy but worthwhile. If someone has a more efficient solution, please share.

    Here is what I came up with to make sure the client-side script for Google Analytics was placed into the HEAD element. GA wants to be the last in the HEAD element, so if you are adding other scripts in Page_Load, you should add theirs last. The settings object below is an instance of a class that holds information I collect; you could read from different sources depending on where you stored your unique ID for Google Analytics. (This has been formatted to fit this screen.)

        if (!IsPostBack)
        {
            // The original check (GoogleAnalyticsID != null || != string.Empty) was
            // always true; IsNullOrEmpty is what was intended.
            if (!string.IsNullOrEmpty(settings.GoogleAnalyticsID))
            {
                string script = @"//<![CDATA[
        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', '" + settings.GoogleAnalyticsID + @"']);
        _gaq.push(['_trackPageview']);
        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol
                ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();
        //]]>";

                var si = new System.Web.UI.HtmlControls.HtmlGenericControl();
                si.TagName = "script";
                si.Attributes.Add("type", "text/javascript");
                si.InnerHtml = script; // was sb.ToString(), a leftover from a StringBuilder version
                this.Page.Header.Controls.Add(si);
            }
        }

    The code above prevents the script from being added on a PostBack, or when the ID could not be read or the settings were lost by accident. If you have a larger function to declare, you can use a StringBuilder to assemble the lines; this is the most compact I wished to go while keeping it readable.

    Read the article

  • 'Unable to mount Filesystem' Error

    - by Charles
    Trying to extract data from a 'bricked' Western Digital MyBook Live 2TB drive. I came across a forum that advised using Ubuntu (booted from a CD) on my MacBook. I managed to download and create a boot CD for Ubuntu (I like this little operating system, btw). I booted the machine with the CD and plugged in the drive (which I had extracted from its casing, placed into an external USB SATA case, and plugged into the laptop). The drive is seen by Ubuntu, but each time I click on the drive, it gives me the following error:

        Unable to mount 2.0 TB Filesystem
        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sdb4,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    I am new to this and spent quite some time searching this site to see if I could find a solution to this problem without troubling anyone. I came up with a few that came close, but some of the questioners mentioned that they had lost data... which scared me from going further. I basically need to extract 1 particular folder from the drive. If I can get this volume 'sdb4' to mount, there is a folder called 'My_Work' which I need to back up. The rest I have/had a copy of. When I typed in dmesg | tail, I got several lines; I think the relevant ones are:

        [  406.864677] EXT4-fs (sdb4): bad block size 65536
        [  429.098776] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  439.786365] hfs: write access to a journaled filesystem is not supported, use the force option at your own risk, mounting read-only
        [  445.982692] EXT4-fs (sdb4): bad block size 65536
        [ 1565.841690] EXT4-fs (sdb4): bad block size 65536

    I read somewhere to try/check 'sudo fdisk -l /dev/sdb4'. It gave me the following result:

        Disk /dev/sdb4: 1995.8 GB, 1995774623744 bytes
        255 heads, 63 sectors/track, 242639 cylinders, total 3897997312 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/sdb4 doesn't contain a valid partition table

    This is where I got to before getting frustrated and deciding to try and get help on this without digging myself deeper into a hole! I understand that the answer may already be out there. If so, could someone please point me in the right direction? And if not, could someone please help me resolve (if possible) my situation!

    Read the article

  • How do you turn on the customizable gnome-panel features (like gnome-applets) in Precise?

    - by chriv
    I resurrected a broken laptop today. I took out the HDD, put it in a USB 3.0 enclosure, and created a VM that would use it. It was running Lucid. I took a screenshot of the desktop before I started "do-release-upgrade", because from experience, I will never have my GUI back the way I want it again. I know how to install gnome-panel to get back the "GNOME Classic" session option. I know how to put my minimize, maximize, and close buttons back in the upper right-hand corner of windows (where they belong). I know how to use gdm instead of lightdm. Unity gets worse in every version (and the other desktop OS is going to be even worse with Metro). Here's what I don't know (in order of importance):

    1. How do you make the panels in GNOME (gnome-panel, to be precise) customizable again (like they were in older versions of Ubuntu)?
    2. How do you install applets in the panels now (right-click is now ignored)?
    3. How can you customize all of the window elements (like you could in older versions of Ubuntu)?

    I can't remember much about Maverick, Natty, or Oneiric (except their names), so I don't know exactly when I lost these capabilities.

    Edit: (no screenshot) My StackExchange reputation (on other StackExchange sites) doesn't carry over to this site, so I can't post the screenshot inline. Take a look at the panels in the screenshot. They are nice, compact, and VERY functional (disk mounter applet, frequently used shortcuts, workspaces, show desktop, kill window, and trash icons, etc.). Notice how small the fonts are (and how little real estate they waste). You can't see the compact title bars, fonts, and window icons in this screenshot (since I redacted the rest of the desktop), but it's the same story there. Please help. I don't want to learn another distro, but Ubuntu gets less customizable with every "upgrade." Screenshot (not an inline image, since I don't have the reputation yet): i.stack.imgur.com/puoUT.png

    Read the article

  • Object Oriented Design of a Small Java Game

    - by user2733436
    This is the problem I am dealing with: I have to make a simple game of NIM. I am learning Java using a book; so far I have only coded programs that deal with 2 classes. This program would have about 4 classes, I guess, including the main class. My problem is that I am having a difficult time designing the classes and how they will interact with each other. I really want to think about and use an object-oriented approach. So the first thing I did was design the Pile class, as it seemed the easiest and made the most sense to me in terms of what methods go in it. Here is what I have got down for the Pile class so far:

        package Nim;

        import java.util.Random;

        public class Pile {
            private int initialSize;
            private final Random rand = new Random();

            public Pile() {
            }

            // Start the pile with a random size between 10 and 99 sticks.
            public void setPile() {
                initialSize = rand.nextInt(100 - 10) + 10;
            }

            // Remove x sticks from the pile.
            public void reducePile(int x) {
                initialSize = initialSize - x;
            }

            public int getPile() {
                return initialSize;
            }

            public boolean hasStick() {
                return initialSize > 0;
            }
        }

    Now I need help in designing the Player class. By that I mean I am not asking for anyone to write code for me, as that defeats the purpose of learning; I was just wondering how I would design the Player class and what would go in it. My guess is that the Player class would contain a method for choosing the computer's move and also for receiving the move the human user makes. Lastly, I am guessing that the turns would be handled in the Game class. I am really lost right now, so if someone could help me think through this problem, it would be great. I know there are some solutions to this problem online, but I refuse to look at them because I want to develop my own approach to such problems, and I am confident that if I can get through this problem I can solve other problems. I apologize if this question is a bit poor, but specifically I need help in designing the Player class.
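
    Purely as an illustration of one way the responsibilities could split (sketched in Python for brevity rather than Java; the names and the take-1-to-3-sticks rule are assumptions, not from the original post): the Player only knows how to pick a move for a given pile size, and the Game owns the turn loop.

        import random

        class Player:
            """One participant; knows only how to choose a move for a given pile size."""
            def __init__(self, name, is_computer=False):
                self.name = name
                self.is_computer = is_computer

            def choose_move(self, pile_size):
                limit = min(3, pile_size)
                if self.is_computer:
                    return random.randint(1, limit)  # naive computer strategy
                return int(input(f"{self.name}, take 1-{limit}: "))

        class Game:
            """Owns the turn loop; alternates players until the pile is empty."""
            def __init__(self, pile_size, player_a, player_b):
                self.pile = pile_size
                self.players = (player_a, player_b)

            def play(self):
                turn = 0
                while self.pile > 0:
                    current = self.players[turn % 2]
                    self.pile -= current.choose_move(self.pile)
                    turn += 1
                print(f"{current.name} took the last stick.")

        # Game(random.randint(10, 99), Player("You"), Player("CPU", True)).play()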

    Read the article

  • Rolling With the Punches

    - by D'Arcy Lussier
    So I’ve been tweeting the last little while “Rolling with the punches” and I’ve had some people ask me what that meant. Whether you’re running a conference (like I am this week), or a project, or a birthday party for a 2-year-old, you need to be ready to handle those things that are unexpected. Risk mitigation can only go so far, and it’s at those times that you need to become resourceful.

    So let me tell you what the last few days have been like. Today is the first day of Prairie Dev Con Winnipeg, a conference that I run. On Friday I was informed that my keynote speaker had lost his voice, one of my speakers had a family emergency and had to back out, and I got a warning from another that he was travelling over the weekend and, if there was a storm or something, he might not be able to get back by Monday for his talk. A storm didn’t happen, but their car did break down and he was delayed. Finally, Saturday night I took my printing order to Staples. It was at 5 and they closed at 6, and I had a bunch of surveys to be printed and cut. The girl working said that she’d have it ready by the next day (Sunday). Her intent was to come in the next morning and finish the job. Unfortunately, she had to be hospitalized that night and never made it into work…and never informed anyone of the remaining work. They found out at 3pm when I came to pick it up, and there was no way they’d be able to cut everything in time.

    So how did we roll with these punches?

    - Miguel, my keynote speaker, was a trooper and was able to do the keynote, but asked that his session get moved from Monday to Tuesday. This is why I wait until the last day before printing out schedules; they can change right up to the event and even later.
    - I was able to move some sessions around to accommodate my stranded speaker and fill the empty slot from the speaker that couldn’t make it.
    - Staples was able to get me half the cut surveys, so I took those and my wife will pick up the rest today. I altered how we’d collect session surveys, and actually I think it’ll work better.

    So all of this is to say: plan, but also plan for what you can’t plan for. There will be things that happen that blindside you, that you’re not sure how to handle or solve. Stop, take a deep breath, and don’t feel that you need to limit yourself to the boundaries that you initially set for yourself. Roll with the punch and learn from it so that you can avoid the blow next time. Now, back to the conference! D

    Read the article

  • A better way to organize your Silverlight Code Snippets.

    - by mbcrump
    I hate re-writing code. I also hate it when I find a great code snippet on the web and forget to bookmark it, or it gets lost in my endless sea of bookmarks. So what do you do to get around this? This is the question that I was asking myself at the end of 2010: how can I get my Silverlight code organized? My requirements for a snippet manager were:

    - Needs to be FREE.
    - An easy way to view XAML/C# code-behind together in one “view”.
    - The ability to store the code snippets in the cloud in case my HDD dies.
    - Searchable keywords to quickly find code snippets.

    I started looking for a snippet manager that would let me do just that and finally found Snippet Manager. Before going any further, I think one of the most important things to note here is that this software supports 37 languages. It’s not just for Silverlight developers or C#-only guys; the software supports Java, SQL and even COBOL.

    Below is a screenshot of the Snippet Manager that shows my Silverlight code snippet. You will notice that I have highlighted two sections. The top part is my XAML and the bottom is my C# code-behind. I’ve included a sample below of my code snippets so that you can get an idea of how I organized them. Another thing that’s great about this software is that it supports plain text; I added some connection strings in the TEXT section below.

    Once you have finished adding your code snippets, you can store them in the cloud. I created an FTP directory called “snippets” on my FTP server and hit the upload button once I was finished adding my new code snippets. This allows me to use the code snippets on another computer with this application on my USB key. See the screenshots below: enter your FTP credentials, hit the Uploads button on the toolbar, then log in to your FTP server and verify that the files are now there.

    Another great feature of the Snippet Manager is that you can also integrate it into VS2010 by clicking Tools –> External Tools and setting up your external tool to point to the executable. You can now launch it by going to Tools –> Snippet Manager. If you want, you could also add a shortcut to launch the program with hotkeys. As you can see, this is a nice little program that includes everything needed to organize your code snippets very cleanly. I didn’t go over every feature, but this is something that you might want to download and give a shot.

    Read the article

  • dual boot ubuntu installation mishap

    - by user590849
    I have a Windows 7 PC where I had 2 partitions: a C drive for my system files and a D drive for my data. I decided to install Ubuntu 11.10 a couple of days ago and thought of installing it in a separate partition of its own. So I made a separate Linux partition of 30GB. I downloaded Ubuntu onto my USB stick and installed it. During the installation process I was asked where to install Ubuntu, so I opened up a screen that was similar to this one. There were six partitions present (I had only made 3 partitions via Windows). Their names were totally different from the ones that I had given in Windows. So I selected a drive which had the same size as the Linux partition that I had made in Windows (no other partition had the same size). I clicked on "Install Now" and got an error message saying "There was no root folder set". I set the newly made partition as my root folder and clicked "Install Now". Now, out of the 6 partitions that were shown, 3 were logical. As soon as I clicked "Install Now", the system asked me where I wanted to put my swap space. I selected one of the logical drives and hit install.

    Ubuntu installed successfully on my system, and at the end it asked me to reboot. I did, and got the following error message: "missing operating system". I was shocked. I tried my Windows recovery disk (that I had gotten when I purchased my laptop) and went into Startup Repair. In the Startup Repair option I was not able to locate Windows. The system asked me to click the "Load Drivers" button to load the drivers for the hard drive where Windows was installed, but I could not locate any drivers for my hard drive. I tried this several times, with no success. I panicked and installed Ubuntu again, this time clicking "OK" at every step (not worrying about the partitioning at all). The OS installed correctly, and I am now able to access my hard drive. No data within the C drive is lost. All the Windows system files are intact. I wish to recover my Windows installation. How do I go about it? Thank you in advance. I do not want to format my computer and install Windows again.

    Read the article

  • New laptop, Windows 8.1, attempting dual install. Ubuntu installer doesn't 'see' existing OS

    - by Flaminica
    Though I've used Ubuntu for a few years, I'm new to installation. Previously I had help, and now I'm doing it alone (I moved across the world). Windows 8.1 came preinstalled on my new laptop (Toshiba Satellite C70-A-17C: Core i5, 8 GB RAM, 750 GB HDD). I have already followed a few steps I found online to prepare for a dual install (with Ubuntu 14.04). I backed up Windows, created a bootable Ubuntu USB and DVD (just in case one didn't work), turned off fast boot and secure boot, and shrank C:. The new unallocated drive portion is 292.97 GB. After shrinking C:, I restarted Windows a couple of times to make sure everything was working fine (it is).

    I then attempted to install with the Ubuntu live USB. However, the Ubuntu installer doesn't see that Windows 8.1 is already installed. I don't understand this, and I don't want to mess with Ubuntu partitioning when I don't know where the partitions will be created. My concern is that, if I go further with the installation process, Windows might be overwritten or compromised in some way. I then tried to reboot using the Ubuntu live DVD, thinking I might get a different result. However, I can't figure out how to make the laptop boot from the CD drive. I went into the BIOS and found no option there, either. Any help is very appreciated!

    EDIT: Looks like I can't link directly to each photo. Here is my album of screenshots: http://imgur.com/a/zChCo The captions, in order:

    - Here you can see that there's no option to boot from the CD drive, only USB.
    - Everything looks okay so far.
    - I don't understand this. Ubuntu has not yet been installed.
    - Unmounting partitions? (I chose 'no'.)
    - Even though the laptop came pre-installed with Windows 8.1, the Ubuntu USB installer can't see it. I chose 'something else'.
    - I need to pick and format partitions. I scrolled down and took a second shot to include all information.
    - Completely lost and cancelled the installation.

    Read the article

  • iOS app with a lot of text

    - by rdurand
    I just asked a question on StackOverflow, but I'm thinking that a part of it belongs here, as questions about design patterns are welcomed by the FAQ. Here is my situation: I have almost completely developed a native iOS app. The last section I need to implement is all the rules of a sport, so that's a lot of text. It has one main level of sections, divided into subsections, containing a lot of structured text (paragraphs, a few pictures, bulleted/numbered lists, tables). I have absolutely no problem with coding; I'm just looking for advice to improve and make the best design pattern possible for my app.

    My first shot (the last one so far) was a UITableViewController containing the sections, sending the user to another UITableViewController with the subsections of the selected section, and then one strange last UITableViewController where the cells contain UITextViews, section headers help structure the content, etc. What I would like is your advice on how to improve the structure of this section. I'm perfectly ready to destroy/rebuild the whole thing; I'm really lost in my design here.

    As I said on SO, I've begun to implement a UIWebView in a UIViewController, showing an HTML page with jQuery Mobile to display the content, and it's fine. My question is more about the 2 views taking the user to that content. I used UITableViewControllers because that's what seemed the most appropriate for a structured hierarchy like this one. But that doesn't seem like the best solution in terms of user experience. What structure / "view-flow" / kind of presentation would you try to implement in my situation? As always, any help would be greatly appreciated!

    Just so you can understand the hierarchy better, a simple example:

        UINavigationController --> Section 1 --> SubSection 1.1 --> Content
                               |             --> SubSection 1.2 --> Content
                               |             --> SubSection 1.3 --> Content
                               --> Section 2 --> SubSection 2.1 --> Content
                               |             --> SubSection 2.2 --> Content
                               |             --> SubSection 2.3 --> Content
                               |             --> SubSection 2.4 --> Content
                               |             --> SubSection 2.5 --> Content
                               --> Section 3 --> SubSection 3.1 --> Content
                                             --> SubSection 3.2 --> Content

        (Column 1: 1 UITableViewController with 3 rows. Column 2: 3 UITableViewControllers
        with different numbers of rows. Column 3: 10 UIViewControllers with a UIWebView.)

    Read the article

  • Is the Observer pattern adequate for this kind of scenario?

    - by Omega
    I'm creating a simple game development framework with Ruby. There is a node system. A node is a game entity, and it has a position. It can have children nodes (and one parent node). Children are always drawn relative to their parent. Nodes have a @position field. Anyone can modify it. When the position is modified, the node must update its children accordingly to properly draw them relative to itself. @position contains a Point instance (a class with x and y properties, plus some other useful methods). I need to know when a node's @position's state changes, so I can tell the node to update its children. This is easy if the programmer does something like this:

        @node.position = Point.new(300, 300)

    Because it is equivalent to calling this:

        # Code in the Node class
        def position=(newValue)
          @position = newValue
          update_my_children # <--- I know that the position changed
        end

    But I'm lost when this happens:

        @node.position.x = 300

    The only one that knows that the position changed is the Point instance stored in the @position property of the node. But I need the node to be notified! It was at this point that I considered the Observer pattern. Basically, Point is now observable. When a node's position property is given a new Point instance (through the assignment operator), it will stop observing the previous Point it had (if any), and start observing the new one. When a Point instance gets a state change, all observers (the node owning it) will be notified, so now my node can update its children when the position changes. A problem is when this happens:

        @someNode.position = @anotherNode.position

    This means that two nodes are observing the same point. If I change one node's position, the other node's would change as well. To fix this, when a position is assigned, I plan to create a new Point instance, copy the passed argument's x and y, and store my newly created point instead of storing the passed one. Another problem I fear is this:

        somePoint = @node.position
        somePoint.x = 500

    This would, technically, modify @node's position. I'm not sure if anyone would be expecting that behavior. I'm under the impression that people see Point as some kind of primitive rather than an actual object. Is this approach even reasonable? Reasons I'm feeling skeptical:

    - I've heard that the Observer pattern should be used with, well, many observers. Technically, in this scenario there is only one observer at a time.
    - When assigning a node's position as another's (@someNode.position = @anotherNode.position), where I create a whole new instance rather than storing the passed point, it feels hackish, or even inefficient.
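
    For reference, here is a minimal sketch of the scheme described above, transliterated into Python for illustration (the names, and the single-observer slot instead of a full observer list, are simplifications, not the framework's actual API):

        class Point:
            """A 2-D point that notifies a single observer when mutated."""
            def __init__(self, x, y):
                self._x, self._y = x, y
                self.observer = None  # the owning node, if any

            @property
            def x(self):
                return self._x

            @x.setter
            def x(self, value):
                self._x = value
                if self.observer:
                    self.observer.position_changed()  # push the state change
            # (y would mirror x)

        class Node:
            def __init__(self):
                self._position = None

            @property
            def position(self):
                return self._position

            @position.setter
            def position(self, point):
                if self._position:
                    self._position.observer = None        # stop observing the old point
                self._position = Point(point.x, point.y)  # defensive copy: no sharing
                self._position.observer = self
                self.position_changed()

            def position_changed(self):
                pass  # update children relative to the new position

    With the defensive copy, assigning one node's position to another leaves the two nodes independent; note that the somePoint = @node.position aliasing case still mutates the node's point, which is the designed behavior.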

    Read the article

  • 12.04 Booting into Terminal

    - by user170796
    To preface this, I would like to say that I am completely new to Ubuntu and have essentially zero programming experience or experience working with the command line and terminal. I installed Ubuntu because I would like to get into programming. If you could provide me with the simplest instructions possible, I would be grateful.

    I have a Lenovo IdeaPad Y500 (Intel i7, NVIDIA GT 750M, 1TB HDD, 16GB SSD cache, 8GB RAM) with Windows 8 on it. Using a Live CD, I installed Ubuntu 12.04 onto a 75 GB partition. During the installation, I kept all default settings except for one thing: I decided to encrypt my home folder, and so checked the corresponding box. The installation completed, and I restarted. Once I restarted, I saw the options:

    - Ubuntu, with Linux 3.2.0-23-generic
    - Ubuntu, with Linux 3.2.0-23-generic (recovery mode)
    - Memory test (memtest86+)
    - Memory test (memtest86+, serial console 115200)
    - Windows Recovery Environment (loader) (on /dev/sdb3)
    - Windows 8 (loader) (on /dev/sdb5)
    - System Setup

    I chose the first option, and was directed to a screen with the Ubuntu logo and the row of five dots below it that change from orange to white. Then, I was brought to a full-screen terminal that prompted me to log in, which I did. I saw no option to boot into a GUI at all, and am lost. I've been searching around and have tried the "startx" command to no avail. Should the command have some sort of context or something?

    I've also tried selecting the recovery mode option from the boot manager. I've tried the resume option from the following menu, which eventually just shuts down the computer after displaying a lot of scrolling text that's too fast for me to read. I've also tried the failsafex mode from the recovery mode menu, which only brings up a terminal box at the bottom of the window that covers the entire bottom part of the screen. Commands won't work in this window. When I try to access Windows 8, I get a message saying that the EFI file path was not specified, or something along those lines. I had to enable Secure Boot in order to access Windows 8 (I had disabled it to be able to boot from the Live CD), which is functioning normally. I am at a complete loss for what to do. Any help will be extremely appreciated.

    EDIT: Bonus question! If you could figure out a way for me to boot into Windows 8 without having to enable Secure Boot, it would save me a lot of trouble. I can deal with switching every time, but I'd rather not have to.

    Read the article

  • Developing a cloud based app

    - by user134897
    I am a company owner who has developed a cloud-based app. My code writer has told me more than once how good he is; better stated, he did a good job of telling me he was better than everyone else in my rather small community. In the last 18 months I have spent nearly $160,000 trying to get this company to the "making money" stage. I am now nearly broke, sitting on the edge of a brilliant marketing plan to launch a much-needed cloud-based app. We did launch our app last year (late 2013), and the feedback from users was amazing. One user who signed up to use the free app stated that we need to call him the moment our company goes public, because he wants to be the first to buy stock.

    Now, here's my problem. We did not originally set out to develop a freemium app; we just sort of ended up there by the natural progression of the app. So now I have an app that really needs to be scrapped and rebuilt. Although I do feel my code writer has displayed some brilliance in what he has done, he was extremely weak on graphics, and every time we speak he tells me there is a newer, better way to code that he is trying to learn.

    So, here's the million-dollar question: how do I find code writers who already know the newest, best ways to write code? Or, maybe better asked, what is the newest, best code-writing technique? Second, is it even possible to find code writers who are also good at graphics? In short, I am nearly broke and need to start over, but I do not know where to find people qualified to write it well the first time around who also display good graphics skills. I am trying to build a team of writers instead of just one person: maybe three good at code and two good at graphics. But I am clueless as to what criteria I should use to determine if I am building the right team members. Please help; I am sure you can tell I am fairly lost by my continued rambling.

    Read the article
