Search Results

Search found 1331 results on 54 pages for 'serious'.

Page 35 of 54

  • UK Oracle User Group Event: Trends in Identity Management

    - by B Shashikumar
    As threat levels rise and new technologies such as cloud and mobile computing gain widespread acceptance, security is occupying more and more mindshare among IT executives. To help prepare for the rapidly changing security landscape, the Oracle UK User Group community and our partners at Enline/SENA have put together a User Group event in London on Apr 19 where you can learn more from your industry peers about upcoming trends in identity management. Here are some of the key trends in identity management and security that we predicted at the beginning of last year; see how they have turned out so far. You have to admit that we have a pretty good track record when it comes to forecasting trends in identity management and security. Threat levels will grow—and there will be more serious breaches: We have since witnessed breaches of high-value targets like RSA and Epsilon. Most organizations have not done enough to protect against insider threats. Organizations need to look for security solutions that stop user access to applications based on real-time patterns of fraud, and that handle situations in which employees change roles or employment status within a company. Cloud computing will continue to grow—and require new security solutions: Cloud computing has since exploded into a dominant secular trend in the industry. Cloud computing continues to present many opportunities, like low upfront costs, rapid deployment, etc. But cloud computing also increases policy fragmentation and reduces visibility and control. So organizations require solutions that bridge the security gap between the enterprise and cloud applications to reduce fragmentation and increase control. Mobile devices will challenge traditional security solutions: Since then we have witnessed a proliferation of mobile devices; combined with increasing numbers of employees bringing their own devices to work (BYOD), these trends continue to dissolve the traditional boundaries of the enterprise. This, in turn, requires a holistic approach within an organization that combines strong authentication and fraud protection, externalization of entitlements, and centralized management across multiple applications—and open standards to make all that possible. Security platforms will continue to converge: As organizations move increasingly toward vendor consolidation, security solutions are also evolving. Next-generation identity management platforms have best-of-breed features, and must also remain open and flexible to remain viable. As a result, developers need products such as the Oracle Access Management Suite in order to efficiently and reliably build identity and access management into applications—without requiring security experts. Organizations will increasingly pursue "business-centric compliance": Privacy and security regulations have continued to increase, so businesses are increasingly looking for solutions that combine strong security and compliance management tools with a business-ready experience for faster, lower-cost implementations. If you'd like to hear more about the top trends in identity management and learn how to empower yourself, then join us for the Oracle UK User Group event on Thu Apr 19 in London, where Oracle and Enline/SENA product experts will come together to share security trends, best practices, and solutions for your business. Register Here.

    Read the article

  • Professional immigration

    - by etranger
    Hello all, Does anyone here have practical advice on professional relocation from Russia to Europe? The reasons behind making such a decision are far beyond the subject, perhaps, so I'll stick to the practical part. Having done some of the "common stuff" for finding a job, I am now facing two serious problems:
    - I am a "dual-class" person, with a university degree in marketing and multiple years of self-taught computer competence (hence my writing here), and professional experience in both areas.
    - I don't currently hold a European work permit.
    From what I can see, this results in the typical HR person throwing out my CV as either "overqualified" or "too much trouble with making the permit". I do have the skills and character to start my own business, but it requires start-up capital that I don't have: over the last years I had to pay high bills for the medical treatment of a family member, who has since passed away. Now I'm almost out of debt. As you can probably guess, English is not a problem, and I'm open to new languages, but the first steps of entering the market, or the society, are the problematic part. I live close to Norway and am trying to get some professional contacts there, but it hasn't given me any practical prospects so far. Any advice is greatly appreciated. EDIT: I currently make my living off web site development and occasional consulting services, both in IT and marketing. For purely geographic reasons I'm dealing with clients that reside in the same city where I live, pop. 350,000. Being quite local, market requirements for web sites are simple and stable — clients need to control navigation, write articles in a Word-like editor, upload illustrations and place ad banners, all with no additional programming. As many web developers do, I'm using my own content management system that fits these expectations. I have also started developing a newer version of this system with better support for international environments, but I'm too distant from the real market demand in Europe to know whether I'm on the right track here. Technically it's based on PHP/MySQL and uses XSLT for templating. It allows for quick website deployment, and has an architectural neatness whose lack made me abandon similar open-source solutions (Joomla and the like). Deployment time from rasterized design proofs is normally under 6-8 working hours; I don't know how that compares to world practice. EDIT 2: Can anyone share what the Norwegian (Scandinavian) web solutions market currently demands?

    Read the article

  • My father wants to learn PHP-MySQL to port his application. What should I do to help?

    - by adijiwa
    My father is a doctor/physician. About 15 years ago he started writing an application to handle his patients' medical records in his clinic at home. The app has the ability to input patients' medical records (obviously), search patients by some criteria, manage medicine stocks, output receipts to a printer, and some more CRUDs. He wrote it in dBase III+. A few years later he migrated to FoxPro 2.6 for DOS, and finally, in a few more years, he rewrote his app in Visual FoxPro 9. And now (actually two years ago) he wants to rewrite it in PHP, but he doesn't know how. The Visual FoxPro version of this app is still running and has no serious problems, except that sometimes it performs slowly. Usually there are 1-5 concurrent users. The binary and database files are shared via Windows share. He did all the coding as a hobby and for free (it is for his own clinic, after all). He also uses this app in two other offices he manages. Some reasons why he wants to rewrite it in PHP-MySQL:
    - He wants to learn
    - Easier to deploy (?)
    - Easier client setup; clients need only a browser
    What should I do to help my father? How should he start? I explored some options:
    - Let my father learn PHP and MySQL (and HTML (and JavaScript?)) from scratch.
    - Create/bundle a framework. I'm thinking of bundling CodeIgniter and a web UI framework (any suggestions?), especially to reduce the effort of writing presentation code.
    What do you think? tl;dr My father (a doctor) wants to rewrite his Visual FoxPro app in PHP-MySQL. He knows very little of PHP and MySQL but he wants to learn. What should I do to help? How should he start? Some facts:
    - My father is 50 years old.
    - His first encounter with a PC was in the early 1980s. It was an IBM PC with an Intel 8088. He knows BASIC. He taught me how to use DOS and how to program in BASIC.
    - The other language he knows fairly well is dBase/FoxPro.
    - I got my bachelor's CS degree last year. I know the internals of my father's app because sometimes he asks me to help him write it.
    Sorry for my English.

    Read the article

  • Oracle OpenWorld / JavaOne Where I'll Be

    - by Shay Shmeltzer
    It's that time of the year again, when San Francisco gets flooded with Oracle and Java geeks for the annual OpenWorld and JavaOne conferences. Here are some of the places where you'll be able to find me: Sunday has a bunch of great ADF content in the ADF Enterprise Methodology Group track - I'm not sure if I'll make it there, but I'm sure those who do will get some serious knowledge transfer. I'm starting Monday at the Keynote for Developers (10:45 in Salon 8 at the Marriott) - that's a great place for ADF developers to start the official week with an overview of what's new and upcoming in the world of development with ADF. While I'm not presenting this session - Chris Tonas, who leads the development tools org, will - a demo that I built will be shown. So I'll be sitting in the audience, fingers crossed, praying to the demo gods (and for the wifi connection to work). The presenting part of my week starts on Monday at 12:15 at Moscone South room 306, where I'll be presenting "CON3004 - Understanding Oracle ADF and Its Role in Oracle Fusion" - a basic introduction to ADF, its architecture, the development experience, and how it integrates and works with the rest of the Fusion Middleware components. After the session, between 2-4, I'll be at the JDeveloper demo booth in Moscone South to answer any questions people might have. Then at 6:15, together with Grant, we'll host "BOF4492 - How to Get Started with Oracle ADF", where we'll try to explain some of the learning paths and resources that are available for people who want to start learning ADF. This is a birds-of-a-feather, so we'd also love to hear ideas from the audience about what paths they took and what things work or need improvement. Tuesday is a relatively quiet day for me, with a shift at the Oracle ADF Essentials pod at JavaOne from 1:30-3:30. There are several very good ADF architecture and best-practices sessions on that day - so I'll try to hit those. Wednesday starts with another shift at the JDeveloper booth at JavaOne. Then at 4:30, instead of doing what all the ADF developers should do and heading over to the ADF meetup at the OTN Lounge, I'll be heading over to JavaOne for my "CON3770 - Oracle JDeveloper and Oracle ADF: What's New" session. It's been a couple of years since the last time JDeveloper or ADF got any airtime at JavaOne - so it will be a great opportunity to show those in the Java community with open minds our approach to Java development. Now that ADF Essentials offers a free way to develop with ADF on GlassFish, I hope we'll be getting more people from the core Java camp interested in what we have to offer. Thursday is another relaxed day for me - who knows, maybe I'll even be able to catch a session or two. If you want to learn more about the ADF-related sessions at OOW, check out our full list here.

    Read the article

  • Is OOP hard because it is not natural?

    - by zvrba
    One can often hear that OOP naturally corresponds to the way people think about the world. But I would strongly disagree with this statement: we (or at least I) conceptualize the world in terms of relationships between things we encounter, but the focus of OOP is designing individual classes and their hierarchies. Note that, in everyday life, relationships and actions exist mostly between objects that would have been instances of unrelated classes in OOP. Examples of such relationships are: "my screen is on top of the table"; "I (a human being) am sitting on a chair"; "a car is on the road"; "I am typing on the keyboard"; "the coffee machine boils water"; "the text is shown in the terminal window." We think in terms of bivalent (sometimes trivalent, as, for example, in "I gave you flowers") verbs where the verb is the action (relation) that operates on two objects to produce some result/action. The focus is on the action, and the two (or three) [grammatical] objects have equal importance. Contrast that with OOP, where you first have to find one object (noun) and tell it to perform some action on another object. The way of thinking is shifted from actions/verbs operating on nouns to nouns operating on nouns -- it is as if everything is being said in the passive or reflexive voice, e.g., "the text is being shown by the terminal window". Or maybe "the text draws itself on the terminal window". Not only is the focus shifted to nouns, but one of the nouns (let's call it the grammatical subject) is given higher "importance" than the other (the grammatical object). Thus one must decide whether one will say terminalWindow.show(someText) or someText.show(terminalWindow). But why burden people with such trivial decisions with no operational consequences when one really means show(terminalWindow, someText)? [The consequences are operationally insignificant -- in both cases the text is shown on the terminal window -- but can be very serious in the design of class hierarchies, and a "wrong" choice can lead to convoluted and hard-to-maintain code.] I would therefore argue that the mainstream way of doing OOP (class-based, single-dispatch) is hard because it IS UNNATURAL and does not correspond to how humans think about the world. Generic methods from CLOS are closer to my way of thinking, but, alas, they are not a widespread approach. Given these problems, how/why did it happen that the currently mainstream way of doing OOP became so popular? And what, if anything, can be done to dethrone it?
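
    To make the contrast concrete, here is a minimal sketch (in Python purely for brevity; all names are illustrative). functools.singledispatch lets show(target, text) remain a free function - the verb stays the focus - although it still dispatches on the first argument only, so it is a halfway point between mainstream single dispatch and CLOS-style generic functions:

        from functools import singledispatch

        class TerminalWindow:
            def write(self, s): print(s)

        class StatusBar:
            def write(self, s): print("[status] " + s)

        # The verb is a free function; neither noun "owns" it.
        @singledispatch
        def show(target, text):
            raise TypeError("don't know how to show on " + type(target).__name__)

        @show.register
        def _(target: TerminalWindow, text):
            target.write(text)

        @show.register
        def _(target: StatusBar, text):
            target.write(text)

        show(TerminalWindow(), "hello")  # dispatches on the type of the first argument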

    Read the article

  • Employer admits that its developers are underpaid and undervalued. Time to part ways?

    - by Psionic
    My employer recently posted an opening for a C# Developer with 3-5 years of experience. The requirements and expectations for the position were fair, up until the criteria for salary determination. It was stated clearly that compensation would depend ONLY on experience with C#, and that years of programming experience with other languages & frameworks would be considered irrelevant and not factored in. I brought up my concern with HR that good candidates would see this as a red flag and steer away. I attempted to explain that software development is about much more than specific languages, and that paying someone for their experience in a single language is a very shortsighted approach to hiring good developers (I'm telling this to the HR dept of a software company). The response: "We are tired of wasting time interviewing developers who expect 'big salaries' because they have lots of additional programming experience in languages other than what we require." The #1 issue here is that 'big salaries' = Market Rate. After some serious discussion, they essentially admitted that nobody at the company is paid near market rate for their skills, and there's nothing that can be done about it. The C-suite has the mentality that employees should only be paid for skills proven over years under their watch. Entry-level developers are picked up for less than $38K and may reach $50K after 3 years, which I'm assuming is around what they plan on offering candidates for the C# position. Another interesting discovery (not as relevant) - people 'promoted' to higher responsibilities do not get raises. The 'promotion' is considered an adjustment of the individuals' roles to better suit their 'strengths', which is what they're already being paid for. After hearing these hard truths straight from HR, I would assume that most people who are looking out for themselves would quickly begin searching for a new employer that has a better idea of what they're doing in the industry (this company fails in many other ways, but I don't want to write a book). Here is my dilemma, however: This is the first official software development position I've held, for barely 1 year now. My previous position of 3 years was with a very small company where I performed many duties, among them software development (not in my official job description, but I tried very hard to make it so). I've identified local openings that I'm currently qualified for, most paying at least 50% more than I'm getting now. The question is: is it too soon for a jump? I am getting valuable experience in my current position, with no shortage of exciting projects. The work environment is very comfortable, and I'm told by many that I'm in the spotlight of the C-level guys for the stuff that I've been able to accomplish during my short time (for what that's worth). However, there is a clear opportunity cost to staying, knowing now with certainty that I will have to wait 3-5 years only to be capped at what I could potentially be earning elsewhere this year. I am also aware that 'job hopper' is a dangerous label to have, regardless of the reasons.

    Read the article

  • How to chain actions/animations together and delay their execution?

    - by codinghands
    I'm trying to build a simple game with a number of screens - 'TitleScreen', 'LoadingScreen', 'PlayScreen', 'HighScoreScreen' - each of which has its own draw & update logic methods, sprites, other useful fields, etc. This works well for me as a game dev beginner, and it runs. However, on my 'PlayScreen' I want to run some animations before the player gets control - dropping in some artwork, playing some sound effects, generally prettifying things a little. However, I'm not sure what the best way to chain animations / sound effects / other timed general events is. I could make an intermediary screen, 'PrePlayScreen', which simply has all of this hardcoded like so:

        Update(){
            Animation anim1 = new Animation(.....);
            Animation anim2 = new Animation(.....);
            anim1.Run();
            if(anim1.State == AnimationState.Complete)
                anim2.Run();
            if(anim2.State == AnimationState.Complete)
                // Load 'PlayScreen' screen
        }

    But this doesn't seem so great - surely there must be a better way? I then thought, 'Hey - an AnimationManager! That'd be awesome!'. But then that creeping OOP panic set in as I thought about it some more. If I create the Animation in my Screen, then add it to the AnimationManager (which may or may not be a GameComponent hooked up to Update/Draw), how can I get 'back' to it? To signal commands like start / end / repeat? I'd still need to keep a reference to the object in my Screen so that I could still communicate with it once it's buried in the bosom of a List in my AnimationManager. This seems bad. I've also tried using events - call 'Update' on all the animations in the PlayScreen update loop, but crucially all of the animations have a bool flag ('Active') which determines whether they should begin. The first animation has this set to 'true', all others 'false'. On completion the first animation raises an event, which sets animation 2's bool flag to true (and so it then runs). Once animation 2 is complete another 'anim complete' event is raised, and the screen state changes. Considering the game I'm making is basically as simple as it gets, I know I'm overthinking this... it's just that the paradigm shift from web to game development is making me break out in a serious case of the stupids.
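
    One common answer to the chaining question is a small composite object that owns the "what comes next" logic, so the screen keeps a single handle and just pumps update() once per frame - no manager, no shared flags. A minimal sketch of the idea (in Python for brevity; the XNA version would be an analogous class with an Update(GameTime) method, and the Animation here is a placeholder that merely counts down frames):

        class Animation:
            # stand-in for a real animation; "finishes" after a fixed number of updates
            def __init__(self, name, frames):
                self.name = name
                self.frames_left = frames
            def update(self, dt):
                self.frames_left -= 1
            @property
            def done(self):
                return self.frames_left <= 0

        class Sequence:
            # runs child animations strictly one after another
            def __init__(self, *items):
                self.items = list(items)
                self.index = 0
            @property
            def done(self):
                return self.index >= len(self.items)
            def update(self, dt):
                if self.done:
                    return
                self.items[self.index].update(dt)
                if self.items[self.index].done:
                    self.index += 1  # hand over to the next animation

        intro = Sequence(Animation("drop_artwork", 30), Animation("fanfare", 10))
        while not intro.done:          # a screen's Update() would call this once per frame
            intro.update(1 / 60)
        print("intro finished - switch to PlayScreen here")

    Because a Sequence exposes the same update()/done surface as an Animation, sequences nest, and the screen never needs to reach "back" into a manager to signal anything.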

    Read the article

  • Reasons NOT to use JSF [closed]

    - by Vain Fellowman
    I am new to StackExchange, but I figured you would be able to help me. We're creating a new Java Enterprise application, replacing a legacy JSP solution. Due to many, many changes, the UI and parts of the business logic will be completely rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and I have some really serious concerns about using it. First of all, it creates the worst, most cluttered, invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web development. Furthermore, it throws together what should never be so tightly coupled: layout, design, logic, and communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hotkeys or drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I get? None, if you think about it. Hundreds of components? I see tens of thousands of HTML/CSS snippets, tens of thousands of JavaScript snippets, and thousands of jQuery plug-ins in addition. It solves really many problems - problems we wouldn't have if we didn't use JSF, or the front-controller pattern at all. And lastly, I think we will have to start over in, say, 2 years. I don't see how I can implement all of our first GUI mock-up (besides, we have no JSF expert on our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Due to everything above, the service tier is in control of JSF. And we will have to start over. My suggestion would be to implement a REST API using JAX-RS, then create an HTML5/JavaScript client with client-side MVC (or some flavor of MVC...). By the way, we will need the REST API anyway, as we are developing a partial Android front-end too. I doubt that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are the pros/cons? How can I emphasize my point to not use JSF? What are strong points for using JSF over my suggestion?

    Read the article

  • High CPU usage by 'svchost.exe' and 'coreServiceShell.exe'

    - by kush.impetus
    I have a laptop running Windows 7 Ultimate 32-bit. For the past few days, my laptop has been facing a serious problem. Whenever I connect to the Internet, either svchost.exe or coreServiceShell.exe or both hog the CPU. coreServiceShell.exe consumes a lot of RAM as well. Going into the details, I found that the high CPU usage of svchost.exe is caused by the Network Location Awareness service, and the high CPU usage of coreServiceShell.exe is caused by Trend Micro Titanium Internet Security 2012. That kinda makes me think that Trend Micro may be the root of the problem. After further testing, I found that if I use IE or Firefox to browse the Internet, things are normal immediately after connecting. But if I use Google Chrome, coreServiceShell.exe hogs both CPU and RAM. At this point, if I disconnect the Internet, the CPU and RAM usage of coreServiceShell.exe continues to be high until I close Chrome. Also, when I close Chrome while the Internet is connected, svchost.exe continues to hog the CPU but coreServiceShell.exe leaves the race. That makes me think that Chrome is the root of the problem, but again, tracing coreServiceShell.exe takes me back to Trend Micro Internet Security. Stopping protection in Trend Micro Internet Security doesn't help either (I am not able to stop its services, though). I have updated Chrome, but no help. I just can't figure out who the culprit is. I can't do without Google Chrome (that is, I can't just stop using it) because of its immensely useful and indispensable features for both browsing and development. Secondly, I can't uninstall the Trend Micro Internet Security suite, since it still has a few months before it expires and is providing me reliable protection. What could be the cause of the problem, and what can I do to resolve it? Thanks in advance

    Read the article

  • Port forwarding on Fortigate 50B

    - by sindre j
    I have serious problems setting up port forwarding on a Fortigate 50B. The unit is basically running factory defaults: the wan1 interface is connected to my fibre-optic Internet modem, and my LAN is connected to the internal switch of the Fortigate. The factory-default firewall policy allowing traffic from the internal interface to wan1 is kept, and I'm able to access the Internet as normal. Then I added a virtual IP and a firewall policy to allow access from the Internet to the web server (standard port 80) on my local server (IP 192.168.9.51). The settings I made are as follows.

    Edit Virtual IP Mapping
    - Name: Server VIP
    - External interface: wan1
    - Type: Static NAT
    - External IP Address/Range: 0.0.0.0
    - Mapped IP Address/Range: 192.168.9.51
    - Port Forwarding: not checked

    Firewall policy
    - Source interface/Zone: wan1
    - Source address: all
    - Destination interface/Zone: internal
    - Destination address: Server VIP
    - Schedule: always
    - Service: HTTP
    - Action: ACCEPT
    - no other settings checked

    What happens now is that I'm unable to access the Internet from my server, and I'm not getting through to the web server from the Internet either. I'm able to ping a site on the outside, but all web traffic is blocked, both ways. I've checked the documentation, but as far as I can tell I have set this up correctly. Anyone here with knowledge of Fortigate port forwarding/NAT?
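
    For comparison, here is roughly how such a VIP looks from the FortiOS CLI - a sketch from memory, so verify the exact syntax against your firmware's CLI reference. One detail that stands out in the settings above is the external IP of 0.0.0.0; a static NAT VIP normally maps the actual wan1 public address (the placeholder below) to the internal host:

        config firewall vip
            edit "Server VIP"
                set extintf "wan1"
                set extip <wan1 public IP>
                set mappedip 192.168.9.51
            next
        end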

    Read the article

  • Why am I getting Network error: 403 Forbidden in firebug for files I am not trying to access?

    - by moomoochoo
    QUESTIONS
    - I'd like to know why I am getting "Network error: 403 Forbidden" in Firebug for files that I am not trying to access.
    - Is it likely to cause any serious problems on the web server?
    - How do I fix it?
    - Why is my browser trying to access the files in the error messages?

    DETAILS
    I'm using WampServer 2.2 to access a folder via the browser. The browser is on the same computer as the server. The computer is running Windows 7 Ultimate. When I view a web folder via my browser (hXXp://localhost/folder) I can see the folder contents OK, but in Firebug I get "Network error: 403 Forbidden". I'm not deliberately trying to access the files in the error messages; you will notice they are in a completely different folder to the one I am looking at. I checked apache_error.log and see

        [Wed Sep 26 00:05:10 2012] [error] [client 127.0.0.1] client denied by server configuration: C:/apache2, referer: hxxp://localhost/folder/

    WampServer 2.2 is installed on the D drive. I took a look at the httpd.conf file, but I couldn't find any references to C:. When I look in Apache's access.log I see

        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/blank.gif HTTP/1.1" 403 217
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/back.gif HTTP/1.1" 403 216
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/text.gif HTTP/1.1" 403 216
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/unknown.gif HTTP/1.1" 403 219
        127.0.0.1 - - [26/Sep/2012:00:05:10 +0900] "GET /icons/folder.gif HTTP/1.1" 403 218

    CONFIGURATION
    - WampServer 2.2 installed on drive D
    - Apache 2.2.22
    - PHP 5.4.3
    - MySQL 5.5.24
    - Firebug 1.10.3
    - Firefox 15.0.1
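
    For what it's worth, those /icons/*.gif requests come from Apache's mod_autoindex, which decorates directory listings with small icons via an Alias that, judging by the error log, points at the non-existent C:/apache2. A hedged fix for httpd.conf - the actual icons path depends on where WampServer keeps Apache, so adjust the D:/ path to your install:

        Alias /icons/ "D:/wamp/bin/apache/apache2.2.22/icons/"
        <Directory "D:/wamp/bin/apache/apache2.2.22/icons/">
            Options Indexes MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>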

    Read the article

  • Could not continue scan with NOLOCK due to data movement during installation

    - by dbdev1
    I am running Windows Server 2008 Standard Edition R2 x64 and I installed SQL Server 2008 Developer Edition. All of the preliminary checks run fine (apart from a warning about Windows Firewall and opening ports, which is unrelated to this and shouldn't be an issue - I can open those ports). Halfway through the actual installation, I get a popup with this error: "Could not continue scan with NOLOCK due to data movement." The installation still runs to completion when I press OK. However, at the end, it states that the following services "failed": Database Engine Services, SQL Server Replication, Full-Text Search, Reporting Services. How do I know if this actually means that anything from my installation (which is on a clean Windows Server setup - nothing else on there, no previous SQL Servers, no upgrades, etc.) is missing? I know from my programming experience that locks are for concurrency control, and the Microsoft help on this issue points to changing my query's locks/transactions in a certain way to fix the issue. But I am not touching any queries. Also, now that I have installed the app, when I log in, I keep getting this message:

        TITLE: Connect to Server
        ------------------------------
        Cannot connect to MSSQLSERVER.
        ------------------------------
        ADDITIONAL INFORMATION:
        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 67)
        For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft+SQL+Server&EvtSrc=MSSQLServer&EvtID=67&LinkId=20476
        ------------------------------
        BUTTONS: OK
        ------------------------------

    I went into the Configuration Manager and enabled named pipes and restarted the service (this is something I have done before, as this message is common and not serious). I have disabled Windows Firewall temporarily. I have checked the instance name against the error logs. Please advise on both of these errors; I think they are related. Thanks
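
    One detail worth checking on the connection error: MSSQLSERVER is the service name of the default instance, not a server name you connect to - a default instance is addressed as ., localhost, or the machine name. A quick test from a command prompt (sqlcmd ships with SQL Server 2008; -E uses Windows authentication):

        sqlcmd -S localhost -E -Q "SELECT @@VERSION"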

    Read the article

  • certutil -ping fails with 30 seconds timeout - what to do?

    - by mark
    The certificate store on my Win7 box is constantly hanging. Observe:

        C:\>1.cmd

        C:\>certutil -? | findstr /i ping
          -ping             -- Ping Active Directory Certificate Services Request interface
          -pingadmin        -- Ping Active Directory Certificate Services Admin interface

        C:\>set PROMPT=$P($t)$G

        C:\(13:04:28.57)>certutil -ping
        CertUtil: -ping command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:04:58.68)>certutil -pingadmin
        CertUtil: -pingadmin command FAILED: 0x80070002 (WIN32: 2)
        CertUtil: The system cannot find the file specified.

        C:\(13:05:28.79)>set PROMPT=$P$G

    Explanations:
    - The first command shows that certutil has -ping and -pingadmin parameters.
    - Trying either ping parameter fails with a 30-second timeout (the current time is visible in the prompt).

    This is a serious problem. It breaks all the secure communication in my app. If anyone knows how this can be fixed - please share. Thanks. P.S. 1.cmd is simply a batch of these commands:

        certutil -? | findstr /i ping
        set PROMPT=$P($t)$G
        certutil -ping
        certutil -pingadmin
        set PROMPT=$P$G

    EDIT 1: I have managed to pin down the single Windows API call that causes the problem - DsGetDcName. According to windbg, certutil -ping invokes it like so:

        PDOMAIN_CONTROLLER_INFO pdci;
        DWORD ret = ::DsGetDcName(NULL, NULL, NULL, NULL, DS_DIRECTORY_SERVICE_PREFERRED, &pdci);

    On my workstation it times out after 30 seconds and then returns error code 1355, which is ERROR_NO_SUCH_DOMAIN ("No domain controller is available for the specified domain or the domain does not exist."). On another machine, which happens to be a Windows Server 2003, it returns almost immediately with the correct domain controller name inside the returned DOMAIN_CONTROLLER_INFO structure. Now the question is: what is missing on my workstation for that API to find the correct domain controller?
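
    Since certutil -ping is stalling inside the DC locator, a useful cross-check is to drive the same code path directly with nltest (built into Windows; YOURDOMAIN is a placeholder for the AD domain name). If these also take ~30 seconds and fail, the problem is DNS or site configuration rather than certutil itself; /force bypasses the locator cache:

        nltest /dsgetdc:YOURDOMAIN
        nltest /dsgetdc:YOURDOMAIN /force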

    Read the article

  • How to stop Excel 2003 from loading a Gazillion files

    - by Gary M. Mugford
    One of my soon-to-be-ex-friends got an Excel file from another friend of his and decided to click on it. It started opening all kinds of files from within Excel. Over 200 and still counting when he called me. I told him to go to Task Manager, which showed a LOT of files in the Applications tab, but only one Excel.exe in the Processes tab. Closing it down there closed down Excel. I then CrossLooped in to see if I could give him a helping hand. Each time Excel was re-opened, the mass influx of files started again. They were all kinds of files: PDFs, DOCs, JPGs, even some spreadsheets. It looked like the end of Solitaire, with multiple windows opening (XP) and the counter on the lone Excel button on the taskbar counting off the files. I did the Task Manager exit routine and went looking for temp files. I CrapCleaned out the system. I made sure I went through the files created in the last hour and deleted anything with "temp" anywhere in it. I also deleted the crappy infected/corrupted file from its place on the desktop (yeah, I know, I yelled for 15 minutes on THAT subject). Despite the delousing, restarting Excel, which complained of a deactivated add-in, would start the cascading windows whether I answered yes or no to that question. Yes, it knew it had a serious crash, but why would it just keep on trying to load the bad file, even when I got rid of it? But here's the real question: WHERE was it loading from? I went through the backup folder and NOTHING was there! So what's the process for starting Excel WITHOUT it trying to do a crash recovery? Sort of makes me feel stupid at times. Thanks for any light you can shed on this issue. GM
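
    On the final question - starting Excel without replaying whatever it thinks it should reopen - Excel's safe-mode switch skips add-ins, toolbars, and both startup folders, which is also where self-reopening workbooks usually lurk (worth checking: %APPDATA%\Microsoft\Excel\XLSTART and the folder named under Tools > Options > General > "At startup, open all files in"):

        excel.exe /safe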

    Read the article

  • TomTom GO & Ubuntu Linux: impersonating a GPRS phone with dund

    - by Broam
    Background: I've called TomTom support, and they don't support Linux. I can get my GO 730 to mount as mass storage, and I found a shell script that will allow me to install maps (haven't tried it; will update when I do). Of note: USB 2.0 only; 1.1 ports will not work. However, I still can't update the TomTom or take advantage of any traffic services. The GO will connect to a mobile phone, but I don't have one that supports tethering. However, I've found a site that claims to know a way to get a Linux machine to impersonate a phone advertising GPRS services, and it apparently works in Fedora releases as old as FC4. I'm having some serious trouble getting this to work on Ubuntu 9.10 Karmic, mainly because I think some of the built-in Bluetooth stuff is getting in the way. Changing the class bits in main.conf (hcid.conf does not exist) doesn't cause a crash, and dund starts and listens, but the TomTom device never seems to want to connect to my machine. I haven't played around much with sdptool (I think that's the name; I'm not in front of a Linux machine right now), but maybe I have to advertise the DUN profile... I'm not very sure. My question: I have no way to diagnose the problems. What are some diagnostic tools I can use to help dig down and figure out what's going on? Update: apparently dund is a legacy tool that's going away. What replaces it?

    Read the article

  • Win 7 running slowly with low CPU usage and memory

    - by guywhoneedsahand
    I have a relatively new (under 2 years old) Windows 7 machine. It has 9GB of RAM and a Core i7 930 CPU (2.8GHz, 8 logical CPUs). About 8 months after a clean install, I noticed my computer was running slowly. I figured it was fragmentation etc., so I did a complete wipe and clean reinstall. However, my problems are somehow persisting. The computer is running painfully slowly, but intermittently - sometimes it will work fine for 3 hours, then suddenly freeze up just from clicking the Start button. The freezes happen randomly - not during any especially intensive computing. I initially thought something might be eating through my CPU and/or memory, but Task Manager indicates that neither the CPU nor memory spikes. In fact, even during serious lag, CPU usage remains at less than 5% and memory at ~1.5GB. It's beyond me why a fresh install on a powerful machine is performing so poorly... and it certainly is frustrating! What could be causing the poor performance, and what can I do to fix it?

    Read the article

  • Openpgp does not work in my Thunderbird-Installation

    - by zerozero
    Hello community, as mentioned above, I have run into serious trouble. Here are the versions of the related software: SuSE 11.2; Thunderbird v3.1.6, released October 27, 2010; Firefox v3.6.12. I created an installation with its own partition both for the user and for the TB mails. For a new installation on different hardware I took these partitions over. When I wanted to read the emails, I got an error message like this: "The GPG agent for your GnuPG version 2.0.12 couldn't be started." Further, I got an error message about access to Enigmail services: the file jar:file:///usr/lib/mozilla/extensions/{3550f703-e582-4d05-9a08-453d09bdfdc6}/{847b3a00-7ab1-11d4-8f02-006008948af5}/chrome/enigmail.jar!/locale/de-DE/enigmail/help/initError.html couldn't be found. I found out that this path comes not from TB or Firefox, but from Enigmail. I installed several (un)packing programs; the only effect was that the OpenPGP entry appeared in the TB menu. The errors described above repeat on every attempt to read an email. I deleted and reinstalled Enigmail, but the errors don't disappear. What can I do to get rid of these error messages? Thanks in advance

    Read the article

  • Ati X1600 driver problem on Mac

    - by Mulot
    Hi all, I currently own an '06 MacBook Pro 1,1, and for some months I have had recurring problems with display bugs and artifacts. A quick search showed that a lot of other Mac users (iMac or MacBook Pro) have the same problem, due to an issue with the X1600 video card. Apparently it's caused by overheating; in my case, even without much heat, I get very bad display bugs such as colorful pixel lines or glitches, plus freezes and crashes, all of this on Tiger, Leopard and Snow Leopard. I found this interesting article here talking about this problem and trying to gather people so that Apple takes the serious GPU problem into consideration. In one of the comments, a user said he removed all bundles named with "radeon", and then he had no more problems under Leopard; it seems to work fine on Snow Leopard too. I did the same thing: I removed the bundles of the driver, restarted, and had no more problems - but also no more 3D acceleration, which is not an acceptable solution. For those interested, here is the list of files to be deleted to stop having this problem:

    /System/Library/Extensions/ATIRadeonX1000.kext
    /System/Library/Extensions/ATIRadeonX1000GA.plugin
    /System/Library/Extensions/ATIRadeonX1000GLDriver.bundle
    /System/Library/Extensions/ATIRadeonX1000VADriver.bundle
    /System/Library/Extensions/ATIRadeonX2000.kext
    /System/Library/Extensions/ATIRadeonX2000GA.plugin
    /System/Library/Extensions/ATIRadeonX2000GLDriver.bundle
    /System/Library/Extensions/ATIRadeonX2000VADriver.bundle

    I would like to know if there is a way to fix this using other drivers, if that's possible, or by creating a group to push Apple to put a replacement program in place. Edit: how to locate those files on your hard drive if you are not under Snow Leopard: sudo find / -iname "*radeon*"

    Read the article

  • Distributing a custom command line tool to enterprise servers

    - by Jeremy Baker
    I've been tasked with building a command-line tool that we will be providing to our enterprise customers so that they can use the API to upload data to our platform. The API works with standard cURL requests, so I can do most of the basic functionality with simple bash scripting, although I would like to provide something solid that really makes it easy for them to use - and I don't know what I don't know. It's been a good 8 years since I've really done any serious sysadmin work. Most of the good tools I use these days are written in Ruby or Python and have a standard distribution process (Gems, for example). However, I know RHEL and other platforms have their own package managers. Finally, the question: in today's day and age, what language / distribution method should I consider in order to cover the widest range of platforms without having to build completely different versions for each platform? I'd also love any general feedback you have about building similar projects, or links to projects that you think do a good job of this now and have open source code that I could read. Thanks in advance!
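
    Since the API is plain cURL, a first cut can be a small POSIX shell script wrapped around curl. A minimal sketch - every endpoint detail below (URL, auth header, form field name) is hypothetical, but the shape is the point:

        #!/bin/sh
        # upload.sh FILE -- sketch of a curl-based uploader; endpoint and field names are made up
        API_URL="${API_URL:-https://api.example.com/v1/upload}"     # hypothetical endpoint
        TOKEN="${API_TOKEN:?set API_TOKEN in the environment}"

        file="$1"
        [ -r "$file" ] || { echo "usage: $0 FILE" >&2; exit 2; }

        curl --fail --silent --show-error \
             -H "Authorization: Bearer $TOKEN" \
             -F "data=@$file" \
             "$API_URL"

    A single POSIX sh script depends only on curl being installed, which sidesteps most of the per-platform packaging question; anything richer (Ruby, Python) reintroduces a runtime dependency that enterprise customers may not have.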

    Read the article

  • Laptops with easy heat sink service?

    - by Niten
    Can you recommend a current laptop model with easy heat sink access - or better yet, a removable air intake filter - making it easy to periodically clean out the dust and lint that always packs up in these things? Every laptop I've owned has eventually overheated on account of a clogged heat sink. (I suppose it doesn't help that I have a cat who loves to hang out where I'm working, or that my laptop is almost always running.) One of the things I really love about my current system, a Dell Inspiron 1420n, is how easy it is to service its cooling system: whenever I notice the fan starting to work harder and the CPU temperature climbing higher than it should be, I merely have to unscrew a single panel from the bottom of the machine, clean out the heat sink, and then I'm good for another few months. Which current models of the "business laptop" variety offer similarly easy cooling system service? I'm looking for something roughly along the lines of:
    - 14- or 15-inch display
    - Nehalem-based CPU
    - Solid construction - magnesium chassis or better (like the Inspiron)
    - TPM (for BitLocker) ideal, but not mandatory
    - Docking adapter ideal, but not mandatory
    - Good battery life
    For example, the ThinkPad T410 would have been my top choice, but it seems like it would be a serious chore to service its heat sink. For the current MacBook Pros it looks downright impossible. No matter how nice the laptop is in other respects, it'll be of no use to me when it's overheating. So, any suggestions? Thanks in advance... (I'm constantly surprised that customers and manufacturers don't pay more attention to this feature, at least in the business laptop subcategory. In the last couple of months I've fixed two friends' laptops which were also overheating due to clogged cooling systems; clearly I'm not the only one affected by this.)

    Read the article

  • Migrating to Amazon AWS etc: What key statistics/questions should be analyzed and asked?

    - by cerd
    I searched SOverflow pretty extensively for something similar to this set of questions. BACKGROUND: We are a growing 'big(ish)' data chemical data company that is outgrowing our lab and our dedicated production workhorses. Make no mistake, we need to do some serious query optimization. Our data comes from a certain govt. agency, so the schema and lack of indexing are atrocious. So yes, I know AWS or EC2 is not a silver bullet when you may need to rework your queries/code entirely rather than use them 'out of the box'. With that said, I would appreciate any input on the following questions:
    1. We produce on CentOS and lab on Ubuntu LTS, which I prefer, especially with its growing cloud / AWS integration. If we are MySQL-centric, and our biggest problem is big cartesian products that produce slow queries, should we roll out what we know, after more optimization, with Ubuntu/MySQL plus the added Amazon horsepower? Or is there some merit to the NoSQL and other technologies they offer?
    2. What are the key metrics I need to gather from Apache and MySQL other than things like disk I/O operations, data up/down averages and trends, and special high-usage periods/scenarios? I've reviewed the AWS/EC2 fine print, but want second opinions.
    3. What other services aside from the basic web/database have proven valuable to you? I know nothing of Hadoop or many of the other technologies they offer; echoing my previous question, do you sometimes find it worth it (initially a gamble, basic homework aside) to dive into a whole new environment and try to find a way of more efficiently producing your data/site product?
    4. Anything I should watch out for in projecting costs, or any other general advice about working with the AWS folks, from anyone else whose company is very niche and very, very technical (scientifically - or anybody, for that matter)?
    Thanks very much for your input - I think this thread could be valuable to others as well.

    Read the article

  • Apache+PHP on Windows Server 2008

    - by Álvaro G. Vicario
    I've installed Apache 2.2 and PHP 5.3 lots of times under Windows XP, Windows Vista and Windows Server 2003. The official *.msi installers work fine and configure everything. Now I need to install them on a Windows Server 2008 R2 Standard 64-bit box and I'm facing nothing but problems: There are no official 64-bit binaries for Apache and no 64-bit binaries at all for PHP (official or third-party). That's alright, I'll make do with good old 32 bits, but it's kind of surprising. Official documentation is vague, generic and completely unaware of UAC or any recent Windows security feature. The PHP installer is unable to configure mod_php, and the Apache installer is unable to configure... well, Apache. After three hours I've finally reached the point where I'm installing everything in the root folder and assigning full control access to all users on all files and directories, and all I've got is a PHP-less Apache server that's able to serve static pages. So I guess it's time to stop and think. My question is: has anyone installed an Apache+PHP production server under Windows Server 2008 in a serious, secure and reliable way and documented the whole process? Or should I just find a bundle like XAMPP and the like that requires no installation? === EDIT === I've installed XAMPP Lite 1.7.3 and everything was working in 5 minutes. I'd still like to find some documentation about installing the original packages: XAMPP installs tons of stuff I don't need and offers no tool to enable and disable PHP extensions.
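
    For reference, wiring PHP 5.3 into Apache 2.2 by hand on Windows comes down to a few httpd.conf lines. This sketch assumes the thread-safe PHP zip was unpacked to C:/php (that build is the one that ships php5apache2_2.dll); adjust paths to your layout:

        LoadModule php5_module "C:/php/php5apache2_2.dll"
        AddHandler application/x-httpd-php .php
        PHPIniDir "C:/php"
        DirectoryIndex index.php index.html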

    Read the article

  • Squid, authentication, Outlook Anywhere, Windows 7 and HTTP 1.1 = NIGHTMARE

    - by Massimo
    I'm running a Squid proxy (latest version, 3.1.4) on Linux CentOS 5.4 with Samba 3.5.4, in order to allow authenticated web access for domain users; everything works fine, and even Windows 7 clients are fully supported. Authentication is transparent for domain users, while it is explicitly requested for non-domain ones, and it works if the user can provide valid domain credentials. All nice and good. Then Outlook Anywhere kicks in, and pain and suffering ensue. When Outlook (be it 2007 or 2010, it doesn't matter) runs on Windows XP clients, it connects gracefully through the Squid proxy to its remote Exchange server. When it runs on Windows 7, it doesn't. If the authentication requirement is lifted from the proxy, everything works on Windows 7 too, so the problem is obviously related to NTLM authentication with Squid. Digging more deeply (Wireshark), I discovered Outlook Anywhere uses HTTP 1.1 when it runs on Windows 7, while it uses HTTP 1.0 when on Windows XP. And it looks like Squid, even in its latest incarnation, still has some serious trouble handling HTTP 1.1 properly, particularly when SSL and proxy authentication are thrown into the mix. While waiting for Squid to fully and officially support HTTP 1.1 (and it looks like this could take quite a long time), I'm looking for one of the following solutions:
    1. Make Squid handle this correctly, if it is at all possible.
    2. Identify Outlook Anywhere connections and have Squid not require authentication for them. But it isn't easy: again, the behaviour of Outlook differs when running on Windows XP and Windows 7, and while on Windows XP Outlook sends a really nice user-agent string of "MSRPC", on Windows 7 it doesn't send any (why? WHY?!?).
    3. Force Outlook Anywhere to use HTTP 1.0 even when running on Windows 7. And no, this is not as simple as deselecting "use HTTP 1.1" in Internet Explorer; Outlook seems to ignore that setting and chooses on its own which protocol to use.
    Any other feasible solution which doesn't involve whitelisting specific destination Exchange servers, which is the last-resort solution I'm trying to avoid.
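
    For option 2, there is one marker that doesn't rely on the user-agent: Outlook Anywhere speaks RPC-over-HTTP, whose requests use the RPC_IN_DATA and RPC_OUT_DATA extension methods. A hedged squid.conf sketch - place it before the http_access line that enforces proxy_auth, and note it effectively whitelists by method rather than by destination:

        acl rpc_over_http method RPC_IN_DATA RPC_OUT_DATA
        http_access allow rpc_over_http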

    Read the article

  • Dell PE2950 - slow IO rates for writing and reading locally

    - by OrenM
    I'm having a serious issue with a Dell PE2950 server. The server has really slow IO rates - so slow that I'm not able to use it anymore. I tried a few things to solve this:
    - changed the disks to new disks (configured as RAID1)
    - changed the PERC card + PERC cables
    - reinstalled the OS (had to, because of the disk change), CentOS 5.5 x64
    - firmware updates to everything
    - virtual disk policy: No Read Ahead, Write Back, disk cache policy disabled
    OpenManage doesn't alert about anything; I also ran Dell's diag tests and everything passed, and Dell didn't see anything in the DSET log. Dell offered to reseat everything, including the CPU; we did that as well, and the IO rates are still slow. I have several PE2950 servers, and I never had such a thing with any of those. All have similar or identical hardware to this one, all configured the same, with the same OS (CentOS 5.5 x64), same disks, same RAID, same policy. Just for comparison, the problematic PE2950 server:

        [root@bad ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 27.7946 seconds, 58.9 MB/s

        real    0m33.968s
        user    0m0.531s
        sys     0m26.000s

    A good PE2950 server (with the exact same hardware):

        [root@good ~]# time sh -c "dd if=/dev/zero of=/tmp/ddfile bs=8k count=200000 && sync"
        200000+0 records in
        200000+0 records out
        1638400000 bytes (1.6 GB) copied, 3.19999 seconds, 512 MB/s

        real    0m7.694s
        user    0m0.053s
        sys     0m4.057s

    Hopefully you will have an idea of what could be causing the problem.
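
    One thing worth ruling out, given the Write Back policy: a PERC quietly falls back to write-through when its cache battery is failed, missing, or in a learn cycle, which produces exactly this kind of write collapse without necessarily tripping an alert. With OMSA installed, a quick check (the controller ID is an assumption - it may differ on your box):

        omreport storage battery
        omreport storage vdisk controller=0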

    Read the article
