Search Results

Search found 7463 results on 299 pages for 'personal computers'.


  • 3 Monitors, Ubuntu 12.04, Gnome 3, 2 nvidia cards, WITH xrandr or xinerama?

    - by Josh
    Ok. I have been banging my head against the wall for over a week now trying to get 3 monitors to work. I have: 1 - Nvidia 8600 GT 512MB PCIe x16, 1 - Nvidia GT 240 1GB PCIe x16. They are not running in SLI (obviously). I have tried everything from tutorials to a few templates, all the way up to nvidia-settings, etc. From what I hear, Xinerama doesn't like Gnome 3 because of the compositing, although I have read a lot about using Xrandr instead and getting the compositing working, but alas, I cannot. It always either crashes X and I have to replace the xorg.conf with my backup, or it defaults to the gnome-classic desktop, and on top of that, when it does default, it keeps adding more and more panels. Basically, I want to be able to use all 3 monitors (yes, just like in Windows) and drag and drop windows between them. I have xorg-edit, but I am still not too sure how to set this up. Is there any way to: A) get compositing working with 3 monitors, 2 nvidia cards, Xinerama, and Gnome 3? or B) use TwinView with 3 monitors (I have heard it can be done by manually editing xorg.conf)? or C) set up Xrandr to draw all 3 monitors with compositing? or D) use a separate X screen for each monitor, and be able to use Gnome with compositing, as well as drag between all 3? or E) ANYTHING. lol. I just want this to work. Any help you can provide would be greatly appreciated. BTW, I am running an Ubuntu mini install with Gnome. Everything works great but this. I can run it fine with 2 monitors and compositing, but not with 3. Also, what is the best GUI tool for editing xorg.conf? I'm not finding anything that is up to date at all and also understandable by humans. haha. I'm actually an engineer by trade, and have been working with computers a LONG time, but this xorg.conf stuff is really confusing the crap out of me. lol. Thanks!
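    As a starting point for option A, here is a minimal xorg.conf sketch (not from the post; the BusID values and layout are placeholder assumptions, so check `lspci | grep VGA` for the real ones). One caveat worth knowing: Xinerama disables GL compositing on drivers of this era, which is exactly why GNOME Shell falls back to classic.

      Section "ServerLayout"
          Identifier "ThreeMonitors"
          Screen 0 "Screen-8600GT" 0 0
          Screen 1 "Screen-GT240" RightOf "Screen-8600GT"
          Option "Xinerama" "on"        # spans the desktop across both cards; disables compositing
      EndSection

      Section "Device"
          Identifier "Card-8600GT"
          Driver "nvidia"
          BusID "PCI:1:0:0"             # placeholder; take yours from lspci
      EndSection

      Section "Device"
          Identifier "Card-GT240"
          Driver "nvidia"
          BusID "PCI:2:0:0"             # placeholder
      EndSection

      Section "Screen"
          Identifier "Screen-8600GT"
          Device "Card-8600GT"
          Option "TwinView" "on"        # two monitors on the first card
      EndSection

      Section "Screen"
          Identifier "Screen-GT240"
          Device "Card-GT240"           # third monitor on the second card
      EndSection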

    Read the article

  • Cloud Adoption Challenges

    - by Herve Roggero
    Originally posted on: http://geekswithblogs.net/hroggero/archive/2013/11/07/cloud-adoption-challenges.aspx While cloud computing makes sense for most organizations and countless projects, I have seen customers struggle significantly with cloud adoption challenges. This blog post is not an attempt to provide a generic assessment of cloud adoption; rather, it is an account of personal experiences in the field, some of which may or may not apply to your organization. Cloud First, Burst? In the rush to cloud adoption, some companies have made the decision to redesign their core system with a cloud-first approach. However, a cloud-first approach means that the system may no longer work on-premises after it has been redesigned, specifically if the system depends on Platform as a Service (PaaS) components (such as Azure Tables). While PaaS makes sense when your company is in a position to adopt the cloud exclusively, it can be difficult to leverage with systems that need to work in different clouds or on-premises. As a result, some companies are starting to rethink their cloud strategy by designing for on-premises first, and modifying only the necessary components to burst into the cloud when needed. This generally means that the components need to work equally well in any environment, which requires leveraging Infrastructure as a Service (IaaS), additional investments for PaaS applications, or both.  What's the Problem? Although most companies can benefit from cloud computing, not all of them can clearly identify a business reason for doing so other than in very generic terms. I have heard many companies claim "it's cheaper", or "it allows us to scale", without any specific metric or clear strategy behind the adoption decision. Other companies have a very clear strategy behind cloud adoption and can precisely articulate business benefits, such as "we have a 500% increase in traffic twice a year, so we need to burst into the cloud to avoid doubling our network and server capacity". Understanding the problem being solved by adopting cloud computing can significantly help organizations determine the optimum path and timeline to adoption. Performance or Scalability? I stopped counting the number of times I heard "the cloud doesn't scale; our database runs faster on a laptop".  While performance and scalability are related concepts, they are nonetheless different in nature. Performance is a measure of response time under a given load (meaning with a specific number of users), while scalability is the performance curve over various loads. For example, one system could see great performance with 100 users but time out with 1,000 users, in which case the system wouldn't scale. Another system could have average performance with 100 users but display the exact same performance with 1,000,000 users, in which case the system would scale. Understanding that cloud computing does not usually provide high performance, but instead provides the tools necessary to build a scalable system (usually using PaaS services such as queuing and data federation), is fundamental to proper cloud adoption. Uptime? Last but not least, you may want to read the Service Level Agreement of your cloud provider in detail if you haven't done so. If you are expecting 99.99% uptime annually, you may be in for a surprise. Depending on the component being used, there may be no associated SLA at all! 
Other components may be restarted at any time, or services may experience failover conditions weekly (or more often) based on the current overall conditions of the cloud service provider, most of which are outside of your control. As a result, for PaaS cloud environments (and to a certain extent some IaaS systems), applications need to assume failure and retry gracefully in order to provide service continuity to end users. About Herve Roggero Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting (http://www.bluesyntax.net). Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, and MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" and "PRO SQL Server 2012 Practices" from Apress, a PluralSight author, and runs the Azure Florida Association.
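    To make that last point concrete, here is a minimal sketch (mine, not Herve's) of the assume-failure-and-retry pattern in Python; the TransientError type is a stand-in for whatever throttling or failover errors your provider's SDK actually raises:

      import random
      import time

      class TransientError(Exception):
          """Stand-in for a provider-specific throttling or failover error."""

      def with_retries(operation, max_attempts=5, base_delay=0.5):
          """Run operation(), retrying transient failures with exponential backoff."""
          for attempt in range(1, max_attempts + 1):
              try:
                  return operation()
              except TransientError:
                  if attempt == max_attempts:
                      raise                             # out of attempts; surface the error
                  # back off exponentially, with jitter so retries don't stampede
                  time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))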

    Read the article

  • College Ratings via the Federal Government

    - by user9147039
    A few weeks back you might remember news about a higher education rating system proposal from the Obama administration. As I've discussed previously, political and stakeholder pressures to improve outcomes and increase transparency are stronger than ever before. The executive branch proposal is intended to make progress in this area. Quoting from the proposal itself, "The ratings will be based upon such measures as: Access, such as percentage of students receiving Pell grants; Affordability, such as average tuition, scholarships, and loan debt; and Outcomes, such as graduation and transfer rates, graduate earnings, and advanced degrees of college graduates." This is going to be quite complex, to say the least. Most notably, higher ed is not monolithic. From community and other 2-year colleges, to small private 4-year colleges, to professional schools, to large public research institutions…the many walks of higher ed life are, well, many. Designing a ratings system that doesn't wind up with lots of unintended consequences and collateral damage will be difficult. At best you would end up potentially tarnishing the reputation of certain institutions that were actually performing well against the metrics and outcome measures that make sense in their "context" of education. At worst you could spend a lot of time and resources designing a system that would lose credibility with its "customers". A lot of institutions I work with already have in place systems like the one described above. They are tracking completion rates, completion timeframes, transfers to other institutions, job placement, and salary information. As I talk to these institutions there are several constants worth noting: • Deciding on which metrics to measure is complicated. While employment and salary data are relatively easy to track, qualitative measures are more difficult. How do you quantify the benefit to someone who studies in a field that may not compensate him or her as well as another field, but that provides huge personal fulfillment and reward? • The data is available, but the systems to transform the data into actual information that can be used in meaningful ways are not. Too often in higher ed, information is siloed. As such, much of the data that needs to be part of a comprehensive system sits in multiple organizations, oftentimes outside the reach of core IT. • Politics and culture are big barriers. One of the areas that my team and I spend a lot of time talking about with higher ed institutions all over the world is the imperative to optimize for student success. This, like the tracking of students' achievement after graduation, requires a level of organizational capacity that does not currently exist. The primary barrier is the culture of "data islands" in higher ed, and the need for leadership to drive out the divisions between departments, schools, colleges, etc. and institute academy-wide analytics and data stewardship initiatives that will enable student success. • Data quality is a very big issue. So many disparate systems exist (some on premise, some "in the cloud") that keep data about "persons" using different means to identify them. Establishing a single source of truth about an individual and his or her data is difficult without some type of data quality policy and tools. Good tools actually exist but are seldom leveraged. Don't misunderstand - I think it's a great idea to drive additional transparency and accountability into the system of higher education. 
And not just at home, but globally. Students and parents need access to key data to make informed, responsible choices. The tools exist to not only enable this kind of information to be shared but to capture the very metrics stakeholders care most about and in a way that makes sense in the context of a given institution's "place" in the overall higher ed panoply.

    Read the article

  • Are these non-standard applications of rendering practical in games?

    - by maul
    I've recently got into 3D and I came up with a few different "tricky" rendering techniques. Unfortunately I don't have the time to work on this myself, but I'd like to know if these are known methods and if they can be used in practice. Hybrid rendering Now I know that ray-tracing is still not fast enough for real-time rendering, at least on home computers. I also know that hybrid rendering (a combination of rasterization and ray-tracing) is a well known theory. However I had the following idea: one could separate a scene into "important" and "not important" objects. First you render the "not important" objects using traditional rasterization. In this pass you also render the "important" objects using a special shader that simply marks these parts on the image using a special color, or some stencil/depth buffer trickery. Then in the second pass you read back the results of the first pass and start ray tracing, but only from the pixels that were marked by the "important" objects' shader. This would allow you to ray-trace only exactly what you need to. Could this be fast enough for real-time effects? Rendered physics I'm specifically talking about bullet physics - intersection of a very small object (point/bullet) that travels along a straight line with other, relatively slow-moving, fairly constant objects. More specifically: hit detection. My idea is that you could render the scene from the point of view of the gun (or the bullet). Every object in the scene would be drawn in a different color. You only need to render a 1x1 pixel window - the center of the screen (again, from the gun's point of view). Then you simply check that central pixel and the color tells you what you hit. This is pixel-perfect hit detection based on the graphical representation of objects, which is not common in games. AFAIK traditional OpenGL "picking" is a similar method. This could be extended in a few ways: For larger (non-bullet) objects you render a larger portion of the screen. If you put a special-colored plane in the middle of the scene (exactly where the bullet will be after the current frame) you get a method that works as the traditional slow-moving iterative physics test as well. You could simulate objects that the bullet can pass through (with decreased velocity) using alpha blending or some similar trick. So are these techniques in use anywhere, and/or are they practical at all?
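    For the "rendered physics" idea, the readback step is plain color picking; a hedged Python/PyOpenGL sketch (the part that renders the scene with ID colors is assumed to happen elsewhere) might look like this:

      from OpenGL.GL import glReadPixels, GL_RGB, GL_UNSIGNED_BYTE

      def id_to_color(object_id):
          # pack a 24-bit object id into the RGB triple used as the object's flat color
          return ((object_id >> 16) & 0xFF, (object_id >> 8) & 0xFF, object_id & 0xFF)

      def pick_center(viewport_w, viewport_h):
          # assumes the scene was just rendered from the gun's point of view,
          # flat-shaded with id_to_color(), into the currently bound framebuffer
          px = glReadPixels(viewport_w // 2, viewport_h // 2, 1, 1, GL_RGB, GL_UNSIGNED_BYTE)
          r, g, b = bytes(px)[:3]           # PyOpenGL may return bytes or a numpy array
          return (r << 16) | (g << 8) | b   # 0 can mean "no hit" if the clear color is black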

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known public-facing high-traffic website, and another, a high-end Financial Services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right?? Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any benefits from having source control integrated, and still uses the infamous Visual Source Safe. I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint, I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS. UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.. UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless; we had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years' worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. so all in all, VSS served its purpose for years without hiccup. There were no horror stories and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS. (It's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless.) 
But from my experience, it worked just fine and got the job done.

    Read the article

  • No access to Samba shares

    - by koanhead
    I have three shared folders in my local home directory - that is to say, on my Ubuntu desktop's /home/me/. All were set up using "Sharing Options" in Nautilus' right-click menu. The standard "Music" and "Videos" folders are configured identically: the "Guest Access" box is checked, but the "Allow others to create and delete" box is not. The third folder, called "shared", is configured to not allow Guest access but to allow others to modify files. I have not altered /etc/samba/smb.conf by hand; I have only used Sharing Options to create and modify these so-called "shares". My roommates have two Windows 7 computers and one Ubuntu Netbook Remix netbook. I have the aforementioned desktop machine and a laptop running 10.04. None of these machines can access any of the shares. Attempts to access the Guest shares result in the message \\machine\directory is not accessible. The network name could not be found. This is the error message generated by a VM running Windows 2000. The other Windows machines generate a similar error. The Ubuntu laptop gives the error Unable to mount location: Failed to mount Windows share. Hurrah, once again, for informative error messages. That really helps a lot. When attempting to browse the folder called "shared" from the laptop, I'm confronted with a password dialog. This behavior is the same with all machines I've tried in this situation. On entering my username and password for the account to which the shares belong, the password dialog briefly disappears and is replaced with an identical dialog. No error message, useful or not, appears. When attempting to browse this folder with the VM, the outcome is the same except that the password dialog helpfully states "incorrect username or password". My assumption is that the username and password in question are those of the user which owns the shares. I have tried all other username and password combinations available in this context and the outcome is the same. I would like to be able to share files. Sharing them with Windows machines is a nice feature, or would be if it were available. Really, I consider sharing files between two machines with the same version of the same operating system kind of a minimum condition for network usability. Samba last functioned reliably for me more than ten years ago. I have attempted to use it on and off since then with only intermittent success. Oh, and "Personal File Sharing" from the Preferences menu does not result in an entry in Places → Network → my-server. In fact, the old entry "MY-SERVER" goes away and is replaced by "koanhead's public files on my-server", which when I attempt to open it from the laptop gives a "DBus.Error.NoReply: Message did not receive a reply." I know I come here and gripe about Ubuntu a lot, but on the other hand I spend literally hours every day trying to fix things in Ubuntu. It's a good system which aspires to greatness, which is why things like this either need to work or be adequately documented. Ideally both would be the case. Anyway, rant over. Hopefully someone will have some insight on this issue. Thanks, all who bother to read this wall o' text, for your time.
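    Not an answer so much as a hedged debugging recipe: Nautilus's Sharing Options creates Samba "usershares" rather than editing smb.conf, and a looping password dialog is the classic symptom of the account having no entry in Samba's own password database (which is separate from the Unix one). Worth trying from a terminal:

      net usershare list                # shares Nautilus actually registered
      net usershare info shared         # path, guest and acl settings for "shared"
      testparm -s                       # sanity-check the effective Samba config
      sudo smbpasswd -a $USER           # add the account to Samba's password database
      smbclient -L localhost -U $USER   # list the shares the way a client sees them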

    Read the article

  • reference list for non-IT driven algorithmic patterns

    - by Quicker
    I am looking for a reference list of non-IT-driven algorithmic patterns (which can still be helped by IT implementations). An example list would be: name; short desc; reference Travelling Salesman; find the shortest possible route on a multiple-target path; http://en.wikipedia.org/wiki/Travelling_salesman_problem Resource Disposition (aka Regulation); distribute a limited/exceeding input to a given number of output receivers based on distribution rules; http://database-programmer.blogspot.de/2010/12/critical-analysis-of-algorithm-sproc.html If there is no such list, but you instantly think of something specific, please 'put it on the desk'. Maybe I can compile something out of the input I get here (actually I am very frustrated, as I did not find any such list via research by myself). Details on scoping: I found it very hard to formulate what I want in a way that excludes everything I do not need (which may be the reason why I did not find anything via Google). There is a database-centric definition of what I am looking for in the section 'Processes' of the second example link. That somehow fits, but the database focus sort of drifts away from the pattern thinking which I have in mind. So here are my own thoughts around what's in and what's out: I am NOT looking for a foundational algorithm reference list of the kind implemented as the basis of any programming language. E.g. the PHP reference describes substr and strlen; that implements algorithms, but is not what I am looking for. The problem the algorithm addresses would exist even if there were no computers (or other IT components). The main focus of the algorithm is NOT to help other algorithms. Chances are high that there are implementations of the solution, or workarounds without any IT support, out there in the world. However, the algorithm could be beneficially implemented or fully supported by a software application; that is, the problem has to be addressed anyway, but running an algorithm implementation in software automates the process (which is why I posted it here and not somewhere else). Typically such algorithm implementations have more than one input field value and more than one output field value, which implies they could not be implemented as a simple function (which is fixed to produce no more than one output value). In a normalized data model, such algorithm implementation outputs often span across multiple rows (sometimes multiple tables), whereby the number of output rows depends on the input parameters and the rows in the table(s) at start time; this implies that any algorithm implementation/procedure must interact with a database (read and/or write). I am mainly looking for patterns, not for specific implementations. Example: the Travelling Salesman assumes arbitrary coordinates; it does not say "you need a table targets with fields x and y". However, sometimes descriptions are very much focused on examples with specific implementations; no worries, as long as the pattern gets clear.
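    To anchor the first list entry, here is a tiny brute-force Travelling Salesman sketch in Python (illustrative only; it enumerates every route, so it is usable for a handful of targets at most):

      from itertools import permutations

      def shortest_route(dist):
          """Brute-force TSP over a small distance matrix, dist[i][j] = cost i -> j."""
          cities = range(1, len(dist))          # fix city 0 as the start/end point
          best = None
          for order in permutations(cities):
              route = (0,) + order + (0,)
              cost = sum(dist[a][b] for a, b in zip(route, route[1:]))
              if best is None or cost < best[0]:
                  best = (cost, route)
          return best

      # example: 4 targets
      d = [[0, 2, 9, 10],
           [1, 0, 6, 4],
           [15, 7, 0, 8],
           [6, 3, 12, 0]]
      print(shortest_route(d))   # -> (21, (0, 2, 3, 1, 0))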

    Read the article

  • Are there any good Java/JVM libraries for my Expression Tree architecture?

    - by Snuggy
    My team and I are developing an enterprise-level application, and I have devised an architecture for it that's best described as an "Expression Tree". The basic idea is that the leaf nodes of the tree are very simple expressions (perhaps simple values or strings). Nodes closer to the trunk get more and more complex, taking the simpler nodes as their inputs and returning more complex results for their parents. Looking at it the other way, the application performs some task, and for this it creates a root expression. The root expression divides its input into smaller units and creates child expressions, which, when evaluated, it can use to build its own result. The subdividing process continues down to the simplest leaf nodes. There are two very important aspects of this architecture: It must be possible to manipulate nodes of the tree after it is built. The nodes may be given new input values to work with, and any change in result for a node needs to be propagated back up the tree to the root node. The application must make best use of available processors and ultimately be scalable to other computers in a grid or in the cloud. Nodes in the tree will often be updating concurrently and notifying other interested nodes in the tree when they get a new value. Unfortunately, I'm not at liberty to discuss my actual application, but to aid understanding a little bit, you might imagine a kind of spreadsheet application being implemented with a similar architecture, where changes to cells in the table are propagated all over the place to other cells that need the result. The spreadsheet could get so massive that applying a multi-core, multi-computer distributed system to solve it would be of benefit. I've got my prototype "Expression Engine" working nicely on a single multi-core PC, but I've started to run into a few concurrency issues (as expected, because I haven't been taking too much care so far), so it's now time to start thinking about migrating the engine to a more robust library, and that leads to a number of related questions: Is there any precedent for my "Expression Tree" architecture that I could research? What programming concepts should I consider? I realise this approach has many similarities to a functional programming style, and I'm already aware of the concepts of using futures and actors. Are there any others? Are there any languages or libraries that I should study? This question is inspired by my accidental discovery of Scala and the Akka library (which has good support for Actors, Futures, distributed workloads, etc.), and I'm wondering if there is anything else I should be looking at as well?
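    Since the question is partly about precedent, here is a toy single-threaded Python sketch of the propagation half of such an architecture (all names are mine, not from the post; the real engine would still need the concurrency story that actors or futures provide):

      class Node:
          def __init__(self, children=()):
              self.parent = None
              self.children = list(children)
              for c in self.children:
                  c.parent = self

          def evaluate(self):            # overridden by concrete node types
              raise NotImplementedError

          def on_change(self):           # bubble a change toward the root
              self.value = self.evaluate()
              if self.parent is not None:
                  self.parent.on_change()

      class Leaf(Node):
          def __init__(self, v=0):
              super().__init__()
              self._v = v
          def set(self, v):
              self._v = v
              self.on_change()
          def evaluate(self):
              return self._v

      class Sum(Node):
          # naive: re-evaluates the whole subtree; a real engine would cache
          def evaluate(self):
              return sum(c.evaluate() for c in self.children)

      a, b = Leaf(1), Leaf(2)
      root = Sum((a, b))
      root.on_change()
      print(root.value)    # 3
      a.set(5)
      print(root.value)    # 7 -- the leaf change propagated up to the root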

    Read the article

  • SSI: Failed String Comparison with CGI Environment Variable [migrated]

    - by Calyo Delphi
    I am currently working on developing a personal website. It's not my first time doing this, but this is my first major foray into implementing SSI. I've run myself into a wall, however, with an if-else directive that uses one of the CGI environment variables as part of its comparison. Even after some limited attempts at debugging, all of the output and documentation that I have means that the comparisons being made should fail outright. This is not the case, and the wrong evaluation is being made by the if-else directive. Here's the code in the file index.shtml: <head> <!--#set var="page" value="Home" --> <!--#include file="headlinks.shtml" --> <style> img#ref { float: right; margin-left: 8px; border-width: 0px; } </style> </head> Here's the code in the file headlinks.shtml: <title><!--#echo var="page" --> &ndash; <!--#echo var="HTTP_HOST" --></title> <!--#set var="docroot" value="${DOCUMENT_ROOT}" --> <!--#echo var="docroot" --> <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) || ( $docroot != '/home/dragarch/public_html' )" --> <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> <!--#else --> <link rel="stylesheet" type="text/css" href="style.css"> <link rel="shortcut icon" type="image/svg+xml" href="favicon.svg" /> <!--#endif --> And here's the output for the file index.shtml: <title>Home &ndash; dragarch</title> /Applications/MAMP/htdocs <link rel="stylesheet" type="text/css" href="../style.css"> <link rel="shortcut icon" type="image/svg+xml" href="../favicon.svg" /> Both style.css and favicon.svg are in the document root with index.shtml, so the if directive should fail and default to the output of the else directive. As you can see, while the document root (which is currently the MAMP htdocs folder on my own notebook) is correct according to the output of the echo directive, the comparison in the if-else directive fails to compare the strings properly. I'm using this page for my documentation: http://httpd.apache.org/docs/2.2/mod/mod_include.html I'm at a complete loss as to why this is the case, and need a bit of help here. EDIT: I should note that dragarch is a hostname that I configured in /etc/hosts to point to 127.0.0.1 so I could test the site without having to use localhost. It has no real effect on the functionality of anything, other than to just act as a prettier hostname to use.
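    For what it's worth, the comparison itself contains a boolean slip: ( $docroot != A ) || ( $docroot != B ) is true for every possible value of $docroot, since no value can equal both A and B at once, so the first branch always wins. The intended guard needs a logical AND:

      <!--#if expr="( $docroot != '/Applications/MAMP/htdocs' ) && ( $docroot != '/home/dragarch/public_html' )" -->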

    Read the article

  • How do you feel about being asked to code during an interview?

    - by Mystere Man
    I have seen a lot of comments about good interview questions and puzzles that potential developers are required to solve during the interview process. I have personally had several interviews in which the interviewer has asked me to write some piece of code or solve a problem during the interview, and I have always performed very poorly in these "tests". The reason is simple: as a developer who spends my days talking to computers, I find I have to prepare myself and "switch gears" to be in "interview mode". I prepare myself to make a good impression. When I'm programming, I'm very focused and am totally different from when I'm being "interpersonal". I just can't get into "the zone" when I'm also having to be a charming and witty potential employee. I feel that by asking a developer to prove his skills during an interview, all you're doing is finding out if they can code under pressure, and at the drop of a hat. It has almost no ability to determine how you would perform in a "real life" development situation. Maybe, if you're looking for someone who can code and chat at the same time, I can see how that would be beneficial. But I think you overlook potential candidates that simply do not perform well in such an artificial environment. While I appreciate that a potential employer wants to see what I can do, I don't think an interview is the place for such a test. I mean, suppose a job for an over-the-road trucker required that you drive while being interviewed. How does that really end well? So I'm curious as to what others think about such situations. Have you failed interviews because you were not in the right frame of mind? Have you failed to make a good interpersonal impression because you were too distracted trying to solve the problem? If you're a hiring manager, or someone who gives interviews, do you even think about such things? Is it really important that someone perform well in an interview? EDIT: To clarify, I'm not against testing applicants. My concern is about testing during an interview. See also: What are the pros and cons for the employer of code questions during an interview? - looking at this from the interviewer's point of view.

    Read the article

  • Building Enterprise Smartphone App – Part 1: Why Build Smart Phone Apps

    - by Tim Murphy
    This is part 1 in a series of posts based on a talk I gave recently at the Chicago Information Technology Architects Group.  Feel free to leave feedback. Intro Most of us already carry smartphones. We play games on them. We keep up with what is going on with our friends and our favorite teams. We take pictures of our kids at their events. But the question is whether that is all they are good for. Many companies have aspects of their business that lend themselves to being performed by mobile devices. Some of them lean toward larger devices such as tablets, but many can be executed on smartphones. This and the following articles will discuss some of the possible applications of smartphone technology for businesses, the platforms that are available, and the considerations you need to make when building them. I'll take a look at some specific scenarios and wrap up with a couple of capabilities that are just emerging that can be used in the future. Why Build Enterprise Smartphone Applications So what are some of the ways you can leverage smartphone technology to gain efficiency in your business or a client's business? There are a few major areas where I have seen mobile platforms be an advantage. Your mobile sales force is a key candidate for leveraging smartphone apps.  They can visit clients at their retail location and place orders on site. It is a more personal approach which can gain you customer loyalty.  A salesperson may also gather information about the way a client does business or who their target market is. This allows you to focus marketing information or build customized support for your customer. You may also need to track physical inventory in a store. This is something that has historically been done with laser scanners, but with the camera capabilities in today's phones and tablets it is possible to use more general multi-purpose devices.  This can save costs on both hardware and telecommunication contracts. Delivery verification is another area that historically has been the domain of specialized devices but can now be accomplished with smartphones.  This also reduces costs because the same device is used for communicating with the driver and other operations.  Add to that the navigation capability of smartphones and you can see how the return on investment increases. Executives are always on the go. They spend most of their time in meetings, and yet they need access to decision-making information at their fingertips. With a smartphone app they can get alerts when major sales are closed or critical accounting processes are completed that may need their attention. They can also answer questions by instantly pulling up BI reports. I have often heard operations support people say that they need things like VPN and RDP from their phones. If they can also have notifications of outages or critical support requests, they can react to situations without needing to be tied to their desks. These are all valid reasons to need smartphone applications.  In the next installment I will discuss platforms and features. del.icio.us Tags: Smartphones, Enterprise Smartphone Apps, Architecture

    Read the article

  • IPv6, isn't it just a few extra bits?

    - by rclewis
    It's always an interesting task to try and explain what you do to family and friends. I have described IPv6 as the "Next Generation Internet" or "Second Internet", but the hollow expressions on my kids' faces scream for the instant relief of the latest video game.  Never one to give up easily, I have formulated a new example: the Post Office... Similar to the Post Office, the Internet delivers mail and packages based on addresses. As the number of residences, businesses, and delivery locations increased, the 5-digit ZIP Code (Washington, DC 20005) was expanded to ZIP+4, allowing for more precise delivery points (Postmaster General, Washington, DC 20260-3100). Ah, if only computers were as simple.  IPv6 isn't an add-on or expansion of the existing IPv4 addressing; it is a new addressing model which will allow the internet to grow from a single computer in the basement of a university or your parents' kitchen table to support the multitude of smart phones, smart TVs, tablets, DVRs, and disc players, all clamoring to connect for information. Unfortunately there are only a finite number of IPv4 public addresses left, and those are being consumed at an ever increasing rate. Few people could have predicted the explosive growth of the internet or the shortage of IPv4 addresses we now face, but there is a "Plan B" and that is the vastly larger address space of IPv6.  Many in the industry have labeled this a "business continuity" problem, when in fact most companies will be able to continue conducting business once they run out of existing IPv4 addresses. The problem is really a customer continuity problem: how will businesses communicate with existing customers and reach new customers online whose only option is to adopt IPv6 when IPv4 is depleted? Perhaps a first step is publishing a blog that is also accessible via IPv6; it's just a few extra bits. Join us for the Oracle OpenWorld 2012 Session:   Navigating IPv6 @ Oracle Thursday, Oct. 4th 2:15PM - 3:15PM  Palace Hotel - Concert   Learn more about IPv6 Technologies at Oracle
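    Those "few extra bits" are easy to quantify; three lines of Python make the point:

      print(2 ** 32)              # 4294967296: every address IPv4 will ever have
      print(2 ** 128)             # ~3.4e38: the IPv6 address space
      print(2 ** 128 // 2 ** 32)  # ~7.9e28 entire IPv4 internets fit inside IPv6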

    Read the article

  • database independent coding framework options?

    - by statirasystems
    Background: I have not programmed in a while besides doing VBA and a little VB.NET, so please forgive my language use. I'm green and have a head cold. I am reading all I can now, but I have no programming circles to draw from. The information I am providing is to help guide you to what I am looking for; I am not confident I can ask the question properly. Story: I have four different projects that I am starting. Obviously I won't be working on all of them at the same time; however, they each will have similar needs and be interrelated. They are as follows: Desktop Environment/System User Interface - basically a product that runs on major computers via Mono or .NET that unifies the look and functions. In the context of the upcoming question, it would be able to directly access data of various types. It would work in tandem with my office suite, system manager, and network application framework. Office Suite - technically it would not be a suite, since I will be doing it from one interface, except for the communications application. As far as the question, it will need to be able to link to various data sources for storing files and for using, manipulating, and presenting information. System Manager - an intelligent system to manage and administer the entire network and all equipment. As far as the question, it needs to be able to access data for archiving and for accessing its own settings stored in various formats, SQL or XML. Network Application Framework - a complete system that can be used for ERP, CRM, CMS, errata, file management, and so on. As to the question, it needs to be able to access its own data or interlink with existing applications. Requirements: C#; simplifies and reduces coding; uses the same code to access different databases (i.e. MySQL, MS SQL, Access, XML, ...); Mono would be nice but not a must. Question: What libraries, frameworks, or other options would be able to help with this? Is there a good resource to guide me? I don't want arguing over what is best, just information to help me further understand and make an educated decision.
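    The asker wants C#/.NET, where ADO.NET's provider model or an ORM such as Entity Framework or NHibernate plays this role; as a language-neutral sketch of the pattern (using SQLAlchemy in Python, a deliberate stand-in), note how only the connection URL names the backend while the query code stays the same:

      from sqlalchemy import create_engine, text

      # only this URL changes per backend; the code below it never does
      engine = create_engine("sqlite:///app.db")
      # e.g. "mysql+pymysql://user:pw@host/db" or "mssql+pyodbc://..." instead

      with engine.connect() as conn:
          for row in conn.execute(text("SELECT 1 AS answer")):
              print(row.answer)   # -> 1 on any supported database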

    Read the article

  • Task scheduler "hidden" only hides task, not process

    - by Brandi
    I am trying to make an application that acts like a desktop application for all the computers in our network. I have already got a Windows Forms app that works like I want it to, and I'm using the Task Scheduler to start it on login. We would really like it if the process as well as the task were hidden from Task Manager, in order to avoid accidental deletion. Selecting "Hidden" in the Task Scheduler hides the task (good!) but the process is still visible (not good enough). I tried using the option to run as SYSTEM or LOCAL SERVICE so that the user would get "access denied" when trying to delete it, or just wouldn't even see it by default. However, running as a service makes the process invisible on Vista and 7, and the point of my app is to display information interactively (the user can click, sort, etc.). Are there any other alternatives to run the process as someone/something besides the logged-in user and still have the logged-on user be able to see and interact with it? (Therefore it would be listed as someone else's app?) From what I've read on the internet, the only ways to actually hide something from Task Manager seem hacky and/or difficult and rather involved. I don't really want to write a bunch of C or whatever only to maybe not have it work on Vista/7 anyway. Besides which, for a legitimate app with a legitimate use, I shouldn't have to go to those extremes... I see "Access Denied" all the time for system processes... why is it so hard for me to do the same? So does anyone have any simple solutions? Is it easier than I think to just list something in the task manager as another user? Thanks in advance for any replies.

    Read the article

  • Can't join OS X Mavericks to AD Domain

    - by watkipet
    I'm attempting to join an OS X Mavericks (10.9) client to a Windows Server 2008 Active Directory domain, however the bind fails with this error in the OS X client's system.log: Oct 24 15:03:15 host.domain.com com.apple.preferences.users.remoteservice[5547]: -[ODCAddServerSheetController handleOtherActionError: gotError: Error Domain=com.apple.OpenDirectory Code=5202 "Authentication server encountered an error while attempting the requested operation." UserInfo=0x7f9e6cb3e180 {NSLocalizedDescription=Authentication server encountered an error while attempting the requested operation., NSLocalizedFailureReason=Authentication server encountered an error while attempting the requested operation.}, Authentication server encountered an error while attempting the requested operation. I've joined (bound) Ubuntu Linux clients to the same domain with net ads join in the past with no problems (using the same administrative user). I don't have access to any server logs. Here's the GUI error (from Directory Utility) on the OS X client: Here's the GUI error (from Users and Groups) in System Preferences on the OS X client: Update After some Wiresharking I've got some more info: OS X Client -> KDC (over UDP): AS_REQ (no padata) OS X Client <- KDC (over UDP): KRB5KDC_ERR_PREAUTH_REQUIRED OS X Client -> KDC (over UDP): AS_REQ (this time with PA-ENC-TIMESTAMP in padata) OS X Client <- KDC (over UDP): KRB5KDC_ERR_RESPONSE_TOO_BIG OS X Client -> KDC (over TCP): AS_REQ (also with PA-ENC-TIMESTAMP in padata) OS X Client <- KDC (over TCP): KDC_ERR_ETYPE_NOSUPP ...and that's it. This is what I think is going on: 1) The OS X client sends a Kerberos request. 2) The KDC says, "You need to pre-authenticate. Try again." 3) The OS X client tries to pre-authenticate (all this so far is over UDP). 4) Something gets lost on our network and the KDC says, "Oops, something went wrong." 5) The OS X client switches to TCP and tries again. 6) Over TCP, the KDC says, "You're using an encryption type I don't support." Note that in its padata records, the OS X client is always using "aes256-cts-hmac-sha1-96" as its encryption type. However, in its KDC_REQ_BODY record it lists the aes256-cts-hmac-sha1-96, aes128-cts-hmac-sha1-96, des3-cbc-sha1, and rc4-hmac encryption types. When the KDC comes back with KDC_ERR_ETYPE_NOSUPP, it uses rc4-hmac as its encryption type in its padata record. I know next to nothing about Kerberos, but it seems to me that the OS X client should go ahead and try the rc4-hmac encryption type. However, it does nothing after this. Update 2 Here's the debug log from Directory Services on the OS X client. Sorry--it's long. 
2013-10-25 14:19:13.219128 PDT - 10544.20463 - ODNodeCustomCall request, NodeID: 52A65FAE-4B24-455D-86EC-2199A780D234, Code: 80 2013-10-25 14:19:13.220409 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - client requested OU - 'CN=Computers,DC=domain,DC=com' 2013-10-25 14:19:13.220427 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Binding using '[email protected]' for kerberos ID 2013-10-25 14:19:13.220571 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - new kerberos credential cache 'MEMORY:0x7fa713635470' for '[email protected]' 2013-10-25 14:19:13.220623 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 1 2013-10-25 14:19:13.220639 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 0 patypes 2013-10-25 14:19:13.220653 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.220699 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.221275 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.221326 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.221373 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222588 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00000001 2013-10-25 14:19:13.222665 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001960 nr: 0.000000 kh: 0.000560 tid: 00000001 2013-10-25 14:19:13.222705 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 2 2013-10-25 14:19:13.222737 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: processing input 2013-10-25 14:19:13.222752 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: got an KRB-ERROR from KDC 2013-10-25 14:19:13.222775 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: KRB-ERROR -1765328359/Additional pre-authentication required 2013-10-25 14:19:13.222791 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send 4 patypes 2013-10-25 14:19:13.222800 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 19 2013-10-25 14:19:13.222808 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 2 2013-10-25 14:19:13.222816 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 16 2013-10-25 
14:19:13.222825 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - KDC send PA-DATA type: 15 2013-10-25 14:19:13.222840 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using ENC-TS with enctype 18 2013-10-25 14:19:13.222850 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func 2013-10-25 14:19:13.227443 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - fast disabled, not doing any fast wrapping 2013-10-25 14:19:13.227502 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 0 2013-10-25 14:19:13.228233 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.228320 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.228374 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229957 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: udp 192.168.0.1:kerberos (192.168.0.1) tid: 00010001 2013-10-25 14:19:13.229975 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto trying over again (reset): 0 2013-10-25 14:19:13.230023 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2 2013-10-25 14:19:13.230664 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host 2013-10-25 14:19:13.230726 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.230818 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.231101 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232743 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232777 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00010002 2013-10-25 14:19:13.232798 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 2 packets 2 wc: 0.005316 nr: 0.000000 kh: 0.001339 tid: 00010002 2013-10-25 14:19:13.232856 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: loop 3 2013-10-25 14:19:13.232868 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - 
krb5_credential - krb5_get_init_creds: processing input
2013-10-25 14:19:13.232900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using keyproc
2013-10-25 14:19:13.232910 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: using default_s2k_func
2013-10-25 14:19:13.236487 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: extracting ticket
2013-10-25 14:19:13.236557 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_init_creds: wc: 0.015944
2013-10-25 14:19:13.237022 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2
2013-10-25 14:19:13.237444 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host
2013-10-25 14:19:13.237482 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001
2013-10-25 14:19:13.237551 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001
2013-10-25 14:19:13.237900 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001
2013-10-25 14:19:13.238616 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001
2013-10-25 14:19:13.238645 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00020001
2013-10-25 14:19:13.238674 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001656 nr: 0.000000 kh: 0.000409 tid: 00020001
2013-10-25 14:19:13.238839 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - Trying to find service kdc for realm DOMAIN.COM flags 2
2013-10-25 14:19:13.239302 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - submissing new requests to new host
2013-10-25 14:19:13.239360 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to host: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001
2013-10-25 14:19:13.239429 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - connecting to 11: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001
2013-10-25 14:19:13.239683 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - writing packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001
2013-10-25 14:19:13.240350 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - reading packet: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001
2013-10-25 14:19:13.240387 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - host completed: tcp 192.168.0.1:kerberos (192.168.0.1) tid: 00030001
2013-10-25 14:19:13.240415 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_sendto_context DOMAIN.COM done: 0 hosts 1 packets 1 wc: 0.001578 nr: 0.000000 kh: 0.000445 tid: 00030001
2013-10-25 14:19:13.240514 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - krb5_credential - krb5_get_credentials_with_flags: DOMAIN.COM wc: 0.003615
2013-10-25 14:19:13.240537 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - valid credentials for [email protected]
2013-10-25 14:19:13.240541 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.240545 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.240555 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Bind Step 5 - Bind/Join computer to domain - 'domain.com'
2013-10-25 14:19:13.241345 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server.domain.com'
2013-10-25 14:19:13.241646 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 12 for host 'server.domain.com:389' address '192.168.0.2' to kqueue list
2013-10-25 14:19:13.241930 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server.domain.com'
2013-10-25 14:19:13.241962 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.241969 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.242231 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality
2013-10-25 14:19:13.242234 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com'
2013-10-25 14:19:13.242239 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing)
2013-10-25 14:19:13.242274 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server.domain.com'
2013-10-25 14:19:13.242282 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]'
2013-10-25 14:19:13.250771 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0
2013-10-25 14:19:13.250784 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.2' with socket 12
2013-10-25 14:19:13.251513 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - locating site using domain domain.com using CLDAP
2013-10-25 14:19:13.252145 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - using site of 'DOMAINGROUP' from CLDAP
2013-10-25 14:19:13.253626 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - resolving 'server2.domain.com'
2013-10-25 14:19:13.253933 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - added socket 13 for host 'server2.domain.com:389' address '192.168.0.1' to kqueue list
2013-10-25 14:19:13.254428 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Setting kerberos server for 'Kerberos:DOMAIN.COM' to 'server2.domain.com'
2013-10-25 14:19:13.254462 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.254468 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - switching GSS to cache 'MEMORY:0x7fa713635470'
2013-10-25 14:19:13.254617 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - setting realm 'DOMAIN.COM' for node '/Active Directory/domain.com'
2013-10-25 14:19:13.254661 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Confidentiality
2013-10-25 14:19:13.254670 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI allow Integrity (signing)
2013-10-25 14:19:13.254689 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using hostname 'server2.domain.com'
2013-10-25 14:19:13.254695 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - GSSAPI using initiator credential '[email protected]'
2013-10-25 14:19:13.262092 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Authenticate to LDAP using Kerberos credential - 0
2013-10-25 14:19:13.262108 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - verified connectivity to '192.168.0.1' with socket 13
2013-10-25 14:19:13.262982 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Computer account either already exists or DC is already Read/Write
2013-10-25 14:19:13.264968 PDT - 10544.20463, Node: /Active Directory, Module: ActiveDirectory - Adding record 'cn=spike,CN=Computers,DC=domain,DC=com' in 'domain.com'

The failure point seems to be 'Computer account either already exists or DC is already Read/Write'; however, I can search for 'spike' on the Active Directory server using Active Directory Explorer and it's not there. If I do the same search for the Linux and Windows PCs I added previously, I can find them.
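
As a cross-check outside Active Directory Explorer, the pending object can be looked for directly over LDAP. A minimal sketch, assuming OpenLDAP's ldapsearch client and reusing the host and container names taken from the log above:

    # authenticate as the same account the bind used
    kinit [email protected]

    # look for the half-created computer object in the Computers container
    ldapsearch -H ldap://server.domain.com -Y GSSAPI \
        -b "CN=Computers,DC=domain,DC=com" "(cn=spike)" distinguishedName

If the object appears here but not in Active Directory Explorer, replication lag between the two domain controllers seen in the log (192.168.0.2 and 192.168.0.1) is a plausible explanation for the bind failure.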

    Read the article

  • Install a web certificate on an Android device

    - by martani_net
    To gain access to the WiFi at university I have to log in with my username/password credentials. The certificate of their website (the local home page that asks for the credentials) is not recognized as a trusted certificate, so we install it separately on our computers. The problem is that I don't often take my laptop to university, so I usually want to connect using my HTC Magic, but I have no clue how to install the certificate separately on Android; it is always rejected.

    [Edit2]: this is what their website states (a rough translation of the French):

    Official certificates from CyberTrust, validated by the CRU (http://www.cru.fr/wiki/scs/), need to be installed. The certificates contain certified information used to generate the encryption keys that protect "sensitive" data exchanges, such as a user's password. When connecting to CanalIP-UPMC, for example, the user must validate the identity of the server by accepting the certificate that appears on screen in a popup window. In reality, the user cannot meaningfully validate a certificate this way, because a simple visual check of it proves nothing. Therefore the certification authority's certificates (CRU-Cybertrust-Educationnal-ca and Cybertrust-global-root-ca) must be installed in the browser beforehand so that the validity of the server certificate can be checked automatically. Before connecting to the CanalIP-UPMC network you must register the certification authority Cybertrust-Educationnal-ca in your browser: download it via the link matching your browser (Internet Explorer, Firefox or Safari). If this procedure is not followed, the user runs a real risk: having their UPMC LDAP directory password stolen. A malicious server could very easily mount a "man-in-the-middle" attack by posing as the legitimate UPMC server, and a stolen password lets the attacker assume the victim's identity for transactions over the Internet, for which the trapped user could be held responsible...

    This is their website: http://www.canalip.upmc.fr/doc/Default.htm (in French, Google-translate it :))

    Does anyone know how to install a web certificate on Android?
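
    One hedged starting point, before fighting the Android side: the CA chain can be pulled straight off the portal with OpenSSL instead of hunting for the browser-specific download links. A sketch, assuming the host from the documentation URL answers HTTPS on the standard port:

        # show the full certificate chain the portal presents
        openssl s_client -connect www.canalip.upmc.fr:443 -showcerts

        # after saving one BEGIN/END CERTIFICATE block as portal-ca.pem,
        # convert it to DER, the form Android is most likely to accept
        openssl x509 -in portal-ca.pem -outform DER -out portal-ca.crt

    Whether the converted file can then be imported depends on the Android release; the system trust store on a device as old as the HTC Magic is effectively read-only, which would explain why the certificate keeps being rejected.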

    Read the article

  • Windows 2003 Domain Controller Very Upset about NIC Teaming

    - by Kyle Brandt
    I set up BACS (Broadcom teaming) to team two NICs on a Windows 2003 Active Directory domain controller. Networking still works okay (I can ping the gateway, etc.), but both DNS and Active Directory fail to start with various 40xx errors. The team I created is Smart Load Balancing with Failover, with one backup and only one NIC in smart load balancing (so really it is just failover). I gave the team the same IP address that the single active NIC had before. Has anyone seen this before, or have any ideas what the problem might be?

    Event Type: Error
    Event Source: DNS
    Event Category: None
    Event ID: 4015
    Date: 3/7/2010
    Time: 10:33:03 AM
    User: N/A
    Computer: ADC
    Description: The DNS server has encountered a critical error from the Active Directory. Check that the Active Directory is functioning properly. The extended error debug information (which may be empty) is "". The event data contains the error.

    Event Type: Error
    Event Source: DNS
    Event Category: None
    Event ID: 4004
    Date: 3/7/2010
    Time: 10:33:03 AM
    User: N/A
    Computer: ADC
    Description: The DNS server was unable to complete directory service enumeration of zone .. This DNS server is configured to use information obtained from Active Directory for this zone and is unable to load the zone without it. Check that the Active Directory is functioning properly and repeat enumeration of the zone. The extended error debug information (which may be empty) is "". The event data contains the error.

    Event Type: Error
    Event Source: NTDS Replication
    Event Category: DS RPC Client
    Event ID: 2087
    Date: 3/7/2010
    Time: 10:40:28 AM
    User: NT AUTHORITY\ANONYMOUS LOGON
    Computer: ADC
    Description: Active Directory could not resolve the following DNS host name of the source domain controller to an IP address. This error prevents additions, deletions and changes in Active Directory from replicating between one or more domain controllers in the forest. Security groups, group policy, users and computers and their passwords will be inconsistent between domain controllers until this error is resolved, potentially affecting logon authentication and access to network resources.
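
    A hedged diagnostic sketch, in case the team broke the DC's own DNS registration (the adapter the records were registered against effectively disappears once the team takes over the IP); all of these ship with Windows Server 2003, dcdiag via the Support Tools:

        REM confirm the team, not a member NIC, now owns the IP and DNS client settings
        ipconfig /all

        REM re-register the DC's A and PTR records in its own DNS
        ipconfig /registerdns

        REM restarting Netlogon rewrites the SRV records from netlogon.dns
        net stop netlogon
        net start netlogon

        REM basic self-test of resolution and connectivity
        dcdiag /test:connectivity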

    Read the article

  • Ideal Bacula appliance?

    - by Ricket
    I'm an intern at a small company and we (the IT department of two) manage <100 client computers and a handful of servers. Currently we're using a company's appliance to handle backup; it does a small backup every night and a full backup every weekend, and a guy comes on Wednesday to take an offsite backup drive (and gives back last week's drive to swap with it). Lately this system, mainly the appliance, has been having problems, so we are looking for an alternative. I'm researching other companies but also looking into what we might expect from trying to do this ourselves. There will undoubtedly be a large learning curve, but hey, that's what serverfault is for, right? :) So anyway I was looking at Bacula. Feature list sounds great, documentation is plentiful, but it's only software. So my question is, what is the ideal backup server to run the Bacula server software on? And not only the server but other related appliances. Our current backup appliance uses only hard drives, not tape drives. It has several plugged into it at one time, in hotswap bays on the front of the machine. I couldn't help but notice though, it's hardly more than Windows XP with hard drive bays, a PCI eSATA card (which connects to another appliance extension piece with 2 more bays), and their software. Since the company will take back their appliance if/when we cancel with them, where can I go to configure a server with these kinds of things? Maybe I'm being naive, I'm sure Dell (and any other computer company) sells them in the small business section of their website, but I wanted to make sure that there's not some other more recommended place that other companies are getting their hardware from, and that I don't need anything special for Bacula.
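
    For what it's worth, Bacula handles the disk-only, rotating-drive model the current appliance uses; tape is optional. A hedged sketch of the storage daemon resource for writing volumes to a hot-swap drive (the resource name and mount point are made up for illustration):

        # bacula-sd.conf -- treat a mounted hot-swap disk as file storage
        Device {
          Name = HotswapStorage
          Media Type = File
          Archive Device = /mnt/offsite      # mount point of the swappable drive
          LabelMedia = yes                   # let Bacula label new volumes itself
          Random Access = yes
          AutomaticMount = yes
          RemovableMedia = no
          AlwaysOpen = no
        }

    On the hardware side that suggests the appliance was never anything special: any small server with hot-swap SATA bays and enough disk can play the same role once the Bacula director and storage daemon are pointed at it.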

    Read the article

  • "No more threads can be created in the system" in Network and Sharing Center

    - by Zell Faze
    A while back I noticed on one of our laboratory computers (Windows 7, very little extra software installed) that the network connection icon in the system tray would claim that it had no network connection, even though it did. This issue would go away after the computer was rebooted, but would surface again the next time I looked at the computer (a few days later). Upon opening the Network and Sharing Center I am shown an actual error message, but not one that seems to give me much information about what the problem is. In place of the usual information about network adapters and whether you are connected to the Internet, it simply says: "No more threads can be created in the system."

    The Event Viewer shows hundreds of events from different services, all with the same message:

    "Volume Shadow Copy Service error: Unexpected error calling routine CoCreateInstance. hr = 0x800700a4, No more threads can be created in the system."
    "The WinHTTP Web Proxy Auto-Discovery Service service failed to start due to the following error: A thread could not be created for the service."
    "The IP Helper service terminated with the following error: No more threads can be created in the system."

    As far as I can tell, this message seems to mean that there is some sort of resource leak in Windows, where something is creating a large number of threads and those threads are not being killed off. I've tried restarting WMI and several services related to networking, to no avail. Can anyone provide more information on what "No more threads can be created in the system" might mean and what I might be able to do to fix the issue? Currently the only solution appears to be restarting.
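
    In case it helps with the same triage elsewhere: the message usually means some process has leaked threads until a system-wide resource ran out, and sorting processes by thread count tends to expose the culprit. A hedged PowerShell sketch, using nothing beyond what stock Windows 7 ships:

        # list the ten processes holding the most threads
        Get-Process |
          Sort-Object { $_.Threads.Count } -Descending |
          Select-Object -First 10 Name, Id, @{ n = 'Threads'; e = { $_.Threads.Count } }

    A healthy process holds tens of threads; one holding thousands is the likely leaker, and restarting or updating that one service beats rebooting the whole machine.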

    Read the article

  • The boot selection failed because a required device is inaccessible 0xc000000e

    - by bbodenmiller
    A family member of mine recently went on vacation and turned off their computer, something they normally do not do. Upon returning home it would not boot, and it now returns the error message below. Generally friends and family come to me for help with computers and I have no problem, however this time I am a bit stumped. Any suggestions would be greatly appreciated.

    Status: 0xc000000e
    Info: The boot selection failed because a required device is inaccessible.

    Before showing this error message it briefly flashes the Windows loading screen. I have been able to confirm through the Windows RE command line and the dir command that the C: drive is accessible, so it is likely just a boot-up issue. I have tried:

    - Launching the repair process discussed in the error message three times; each time it requires a restart and then returns to the same error message.
    - Changing the boot order to put the hard drive first.
    - Getting into Safe Mode; F8 just results in the same error message before I can get to the menu to select Safe Mode.
    - Checking that the BCD (Boot Configuration Data) is still intact with bcdedit, as per https://www.symantec.com/business/support/index?page=content&id=TECH160475

    I plan to try the following, but would like additional comments on them:

    - sfc /scannow (requires a restart and thus will likely result in the error message again)
    - A memory scan
    - Bootrec, as per http://support.microsoft.com/kb/927392#method1
    - Swapping IDE cables/ports
    - Resetting the BIOS

    I noticed others around the web with similar issues are dual-booting, however this machine is not set up in a dual-boot environment. Additionally, at one point this error message supposedly showed up before I started working on the computer:

    The instruction at 0xfbe2584d referenced memory at 0x00000008. The memory could not be read.

    As previously stated, any additional suggestions or words of advice would be greatly appreciated.
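
    For the Bootrec step, the usual sequence from the Windows RE command prompt is sketched below; the order follows the KB article linked above, with a disk check first since 0xc000000e often traces back to a failing drive:

        REM recover readable data from any bad sectors first
        chkdsk c: /r

        REM rebuild the boot code, then rescan for Windows installations
        bootrec /fixmbr
        bootrec /fixboot
        bootrec /scanos
        bootrec /rebuildbcd

    If /scanos lists no installation even though dir c:\ works, one common cause is the BCD's device entry pointing at the wrong partition, which would match the symptom described.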

    Read the article

  • Getting SMB file shares working over a PPTP VPN

    - by Ben Scott
    I'm having issues getting SMB file shares working over a PPTP VPN. The server setup consists of a security device (DrayTek V3300) which passes the PPTP authentication to an SBS 2003 server running RRAS. The server is the DC and provides DNS and WINS, the single NIC's name server is set to the NIC's own IP (192.168...), and DHCP on the DrayTek sets the server's IP as the DNS. If I create a new VPN connection in Win7, leaving everything as default apart from the server, username, password and domain, I can:

    - ping everything by IP address
    - resolve IPs with nslookup using their fully-qualified name, as in nslookup fileserver.mydomain.local
    - ping machines by fully-qualified name, as in ping fileserver.mydomain.local

    However, if I try to access a file share:

    - within Explorer, I get "Windows cannot access ..." with "Error code: 0x80004005 Unspecified Error"
    - using net use z: \\fileserver.mydomain.local\share, I get "System error 53 has occurred. The network path was not found."

    If I add the machine name to my HOSTS file I can use the file share, which is my last-ditch workaround, but I have a number of VPN users and would rather have a solution that doesn't involve me trying to hand-edit system files on computers half a country away. If I set the WINS server explicitly in the connection's IPv4 settings I don't have to use the FQN to ping the machine, but that doesn't change anything else.

    EDIT: The PC I'm having the issue on is running Win 7 Home Premium. After more testing I actually have two other PCs that work (one W7HP, one XP Home) and another Vista PC that doesn't work (not tested as much as the others), all four on the same internet connection (behind the same router). All of them were tested with a straightforward, all-defaults, new VPN configuration.
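
    Since "System error 53" is a name-resolution failure rather than an SMB failure, a hedged way to confirm that from an affected client (the server IP below is a placeholder, substitute the real one):

        REM does SMB itself work when name resolution is skipped entirely?
        net use z: \\192.168.0.10\share

        REM read the server's NetBIOS name table directly by IP
        nbtstat -A 192.168.0.10

        REM purge and reload the NetBIOS name cache after changing WINS settings
        nbtstat -R

    If net use against the raw IP succeeds, the shares and permissions are fine and the problem is confined to name resolution over the tunnel, which points back at how the affected clients are (or are not) using WINS.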

    Read the article

  • Debian Apache2 and SSL

    - by Topher Fangio
    Hello all, I recently took over a server that is using Apache2 with SSL. I have set up a new server to which I am migrating all of the old websites so that we can more easily scale (it's a cloud server) and so that I can set everything up correctly (or at least with some sort of convention). I have read quite a few articles on setting up Apache2 and SSL with virtual hosts, but I'm a bit confused because all of the examples show three files and I only seem to have two. To compound the problem, they are all named differently (do the file extensions actually make a difference?). The examples show something to this effect:

    <VirtualHost X.X.X.X:443>
        ServerAlias something.mydomain.com
        ServerAdmin [email protected]
        DocumentRoot /var/www/project/client/site
        SSLEngine on
        SSLCertificateFile /etc/ssl/certs/mydomain-cert.pem
        SSLCertificateKeyFile /etc/ssl/private/mydomain-key.pem
        SSLCertificateChainFile /etc/ssl/certs/mydomain-ca.crt
    </VirtualHost>

    However, the files I have are:

    _.mydomain.com.crt
    gd_bundle.crt

    It is a wildcard certificate that we purchased through GoDaddy, I believe. I believe the first file is the actual certificate and gd_bundle.crt is the chain file, but that leaves me without a key file. There is also a random mydomain.csr file lying around on the old server, but it wasn't one of the files bundled with the download from GoDaddy, so I'm not really sure what it is. Any help in figuring out what I need to do would be greatly appreciated. I am a software developer, so I know my way around computers, but I have only dabbled in server setup/maintenance. Much thanks!
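
    A hedged sketch for sorting out which file is which with OpenSSL; the filenames are the ones above, and the private key, if it still exists, will be wherever the old server generated the CSR:

        # identify each file: the leaf certificate vs. the GoDaddy chain
        openssl x509 -noout -subject -issuer -in _.mydomain.com.crt
        openssl x509 -noout -subject -issuer -in gd_bundle.crt

        # the CSR was created alongside a private key on the old server
        openssl req -noout -subject -in mydomain.csr

        # given a candidate key file (name is a placeholder), confirm it
        # matches the certificate: the two digests must be identical
        openssl x509 -noout -modulus -in _.mydomain.com.crt | openssl md5
        openssl rsa  -noout -modulus -in candidate-private.key | openssl md5

    The short version: GoDaddy never has the private key; it was generated on the old server when the CSR was made, so it has to be copied from there and referenced with SSLCertificateKeyFile alongside the two .crt files.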

    Read the article

  • Issues with doing a P2V on Exchange 2007

    - by PokermUNKEE
    I shut down all Exchange services on the physical box and did the conversion. Everything converted fine, the server booted up with no issues, and I could receive email from the outside world without a problem. But I am unable to send emails internally or externally; they just sit in the Outbox. I also disabled the DHCP Client service as I didn't want the server to get an IP address from DHCP when it first came online. It came online with no IP, then I used the console to assign the correct static IP address (the same one the physical server had) and rebooted. I ran the Exchange Troubleshooting Assistant and it gave me the following errors:

    The value for the '\MSExchangeIS Mailbox\Messages Queued For Submission' counter on server exchange is greater than zero (average value is 8.8) and it appears that 'MSExchangeMailSubmission' is failing to submit messages to at least one computer with the Hub Transport server role installed over the last minute.

    Found 1 computers with the Hub Transport server role installed in the same Active Directory site as server exchange using local Active Directory site GUID 'ce8a4367-1bf6-4825-9cac-c4e2b115c450'. Check Local Active Directory Site Hub Transport Server Role Health (Mailflow_CheckLocalBhHealth1.1.1.1.1.1.1.1)

    I have one Exchange server with all the roles on it. Has anyone seen this before when doing a P2V conversion? Please help :(
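
    A hedged first step from the Exchange Management Shell (these cmdlets ship with Exchange 2007) to see where submission is stalling:

        # is anything sitting in the transport queues, and what is the last error?
        Get-Queue | Format-List Identity, Status, MessageCount, LastError

        # push a test message through the full submission path
        Test-Mailflow

        # after fixing the underlying issue, kick any queues stuck in retry
        Retry-Queue -Filter { Status -eq "Retry" }

    Given that the network adapter changed during the P2V, the LastError output is worth reading closely; stale DNS registration for the server and services still bound to the old adapter are the usual suspects in this situation.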

    Read the article

  • Remote Desktop Connection Only Works One Way

    - by advocate
    I can't get my desktop to connect to my laptop through Remote Desktop Connection. Unfortunately I can only get my laptop to connect to my desktop (quite useless).

    Desktop: Windows 7 Ultimate 64-bit SP1. Windows Firewall is off for all 3 profiles (domain / private / public). Remote Desktop is enabled and set to allow all connections. Service states:

    - Running: Remote Desktop Configuration
    - Running: Remote Desktop Services
    - Running: Remote Desktop Services UserMode Port Redirector
    - Running: Remote Procedure Call (RPC)
    - Stopped: Remote Access Auto Connection Manager
    - Stopped: Remote Access Connection Manager
    - Stopped: Remote Procedure Call (RPC) Locator
    - Stopped: Remote Registry
    - Stopped: Routing and Remote Access
    - Stopped: Windows Remote Management (WS-Management)

    Laptop: Windows 7 Home Premium 64-bit SP1. Windows Firewall is off for all 3 profiles (domain / private / public). Remote Desktop Connection is installed and the system is set to 'Allow Remote Assistance connections to this computer'. Service states:

    - Running: Remote Procedure Call (RPC)
    - Stopped: Remote Access Auto Connection Manager
    - Stopped: Remote Access Connection Manager
    - Stopped: Remote Desktop Configuration
    - Stopped: Remote Desktop Services
    - Stopped: Remote Procedure Call (RPC) Locator
    - Stopped: Remote Registry
    - Stopped: Routing and Remote Access
    - Stopped: Windows Remote Management (WS-Management)

    It should be noted that the laptop I'm trying to connect to is an Alienware and might be running some wonky Dell settings. Also, its Remote Desktop settings are slightly different because it's a Home edition of Windows and not Ultimate like my desktop. Finally, both computers are in the same HomeGroup, so RDC can be accessed with one click through the network section of Windows. They're also in the same workgroup, MSHOME, just to see if that helps.
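
    One hedged check before digging further into settings, run on the laptop from an elevated prompt, to confirm it is listening for RDP at all:

        REM can the Remote Desktop Services service run?
        sc query TermService

        REM is anything listening on the default RDP port?
        netstat -an | find "3389"

    Note that the laptop's service list above already shows Remote Desktop Services as Stopped, and 'Allow Remote Assistance connections' is not the same as allowing Remote Desktop; Windows 7 Home Premium ships only the Remote Desktop client, not the host side, so incoming connections to it are expected to fail.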

    Read the article
