Search Results

Search found 951 results on 39 pages for 'restricted'.

Page 28/39 | < Previous Page | 24 25 26 27 28 29 30 31 32 33 34 35  | Next Page >

  • C++11 Tidbits: Decltype (Part 2, trailing return type)

    - by Paolo Carlini
    Following on from the last tidbit showing how the decltype operator essentially queries the type of an expression, the second part of this overview discusses how decltype can be syntactically combined with auto (itself the subject of the March 2010 tidbit). This combination can be used to specify trailing return types, also known informally as "late specified return types". Leaving aside the technical jargon, a simple example from section 8.3.5 of the C++11 standard usefully introduces this month's topic. Let's consider a template function like: template <class T, class U> ??? foo(T t, U u) { return t + u; } The question is: what should replace the question marks? The problem is that we are dealing with a template, thus we don't know at the outset the types of T and U. Even if they were restricted to be arithmetic builtin types, non-trivial rules in C++ relate the type of the sum to the types of T and U. In the past - in the GNU C++ runtime library too - programmers used to address these situations by way of rather ugly tricks involving __typeof__ which now, with decltype, could be rewritten as: template <class T, class U> decltype((*(T*)0) + (*(U*)0)) foo(T t, U u) { return t + u; } Of course the latter is guaranteed to work only for builtin arithmetic types, e.g., '0' must make sense. In short: it's a hack. On the other hand, in C++11 you can use auto: template <class T, class U> auto foo(T t, U u) -> decltype(t + u) { return t + u; } This is much better. It's generic and a construct fully supported by the language. Finally, let's see a real-life example directly taken from the C++11 runtime library as implemented in GCC: template<typename _IteratorL, typename _IteratorR> inline auto operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) -> decltype(__y.base() - __x.base()) { return __y.base() - __x.base(); } By now it should appear completely straightforward. The availability of trailing return types in C++11 allowed fixing a real bug in the C++98 implementation of this operator (and many similar ones). In GCC, C++98 mode, this operator is: template<typename _IteratorL, typename _IteratorR> inline typename reverse_iterator<_IteratorL>::difference_type operator-(const reverse_iterator<_IteratorL>& __x, const reverse_iterator<_IteratorR>& __y) { return __y.base() - __x.base(); } This was guaranteed to work well with heterogeneous reverse_iterator types only if difference_type was the same for both types.
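
    For readers who want to try the construct, here is a minimal, self-contained sketch of the trailing-return-type form discussed above (the calls in main are purely illustrative); it should compile with any C++11 compiler:

        #include <iostream>

        // The trailing return type can mention the parameters t and u because they
        // are already in scope after the parameter list - which is the whole point.
        template <class T, class U>
        auto foo(T t, U u) -> decltype(t + u) {
            return t + u;
        }

        int main() {
            auto a = foo(1, 2.5);   // decltype(1 + 2.5) is double
            auto b = foo(3u, 4L);   // the usual arithmetic conversions decide the type
            std::cout << a << " " << b << "\n";
        }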

    Read the article

  • Cloud service and IM protocol advice, for a backend to group chat mobile app

    - by Jonathan
    Overview I’m going to develop an app on Android and iOS. It will allow users to set up group ‘chat rooms’ and talk on chat rooms set up by other users. The service needs to be highly scalable, such that it could accommodate a massive increase in users overnight (we can only dream). Chat requirements The chat protocol used should be flexible: it should allow me to determine who can view/post on ‘chat rooms’ based on certain other factors, as determined by the first poster/creator of the particular ‘chat room’. It should also allow for users to simply install the app and begin using the service, after only providing a simple nickname (which could be changed later). Chat protocol plans Having looked around I think the XMPP protocol is the best candidate. In particular the Multi-user chat extension looks like what I’ll need. Would this be most suited to my requirements, or do you know another potential solution? Cloud service I have been deciding between Amazon Web Services, Google App Engine and Windows Azure. I’m coming to the conclusion that Azure will be best, as it is easier to manage than AWS (ease of scalability will be a key factor in the design), I think it may be less restricted than GAE, plus Azure will soon have toolkits to allow easy interfacing with both Android and iOS phones. Is this the decision you would have made, or would you recommend/look into other cloud services? General project philosophy I have only recently started looking into this project’s feasibility, and am no expert on any of its aspects. So wherever possible I will leave the actual implementations to experts, i.e. choosing a higher-level cloud service, using a well-documented plugin of a, proven reliable, group chat protocol etc. My background I have some programming knowledge from a computer science degree. Main languages I’ve used have been Java and Python, but I don’t want this to affect design decisions for the project. The most appropriate languages for the task should be used, i.e. I don’t mind learning a lot of new skills (my current programming levels are relatively basic anyway). Thank you Thanks for reading, and any advice you have about any aspect would be greatly appreciated :-)

    Read the article

  • Randomly generate directed graph on a grid

    - by Talon876
    I am trying to randomly generate a directed graph for the purpose of making a puzzle game similar to the ice sliding puzzles from Pokemon. This is essentially what I want to be able to randomly generate: http://bulbanews.bulbagarden.net/wiki/Crunching_the_numbers:_Graph_theory. I need to be able to limit the size of the graph in an x and y dimension. In the example given in the link, it would be restricted to an 8x4 grid. The problem I am running into is not randomly generating the graph, but randomly generating a graph that I can properly map out in 2D space, since I need something (like a rock) on the opposite side of a node to make it visually make sense when you stop sliding. The problem with this is that sometimes the rock ends up in the path between two other nodes, or possibly on another node itself, which causes the entire graph to become broken. After discussing the problem with a few people I know, we came to a couple of conclusions that may lead to a solution. Include the obstacles in the grid as part of the graph when constructing it. Start out with a fully filled grid, draw a random path, and delete blocks that will make that path work. The problem then becomes figuring out which ones to delete to avoid introducing an additional, shorter path. We were also thinking a dynamic programming algorithm may be beneficial, though none of us are too skilled at creating dynamic programming algorithms from nothing. Any ideas or references about what this problem is officially called (if it's an official graph problem) would be most helpful. Here are some examples of what I have accomplished so far by just randomly placing blocks and generating the navigation graph from the chosen start/finish. The idea (as described in the previous link) is you start at the green S and want to get to the green F. You do this by moving up/down/left/right, and you continue moving in the direction chosen until you hit a wall. In these pictures, grey is a wall, white is the floor, the purple line is the minimum-length path from start to finish, and the black lines and grey dots represent possible paths. Here are some bad examples of randomly generated graphs: http://i.stack.imgur.com/9uaM6.png Here are some good examples of randomly generated (or hand-tweaked) graphs: i.stack.imgur.com/uUGeL.png (can't post another link, sorry) I've also noticed that the more challenging ones, when actually playing this as a puzzle, are the ones with lots of high-degree nodes along the minimum path.
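
    To make the structure concrete, here is a minimal sketch (not the poster's code; it assumes a grid of 0 = floor and 1 = wall/rock, with start and finish on floor cells) that derives the slide-move graph from a grid and finds the minimum number of moves with a breadth-first search:

        from collections import deque

        DIRS = ((1, 0), (-1, 0), (0, 1), (0, -1))

        def slide(grid, r, c, dr, dc):
            """Cell you stop at when sliding from (r, c) until a wall or the grid edge."""
            rows, cols = len(grid), len(grid[0])
            while 0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc] == 0:
                r, c = r + dr, c + dc
            return (r, c)

        def build_graph(grid):
            """Directed graph: floor cell -> cells reachable with a single slide."""
            return {(r, c): {slide(grid, r, c, dr, dc) for dr, dc in DIRS}
                    for r, row in enumerate(grid)
                    for c, cell in enumerate(row) if cell == 0}

        def min_moves(grid, start, finish):
            """BFS over slide moves; returns the minimum move count, or None if unreachable."""
            graph, seen, queue = build_graph(grid), {start}, deque([(start, 0)])
            while queue:
                node, dist = queue.popleft()
                if node == finish:
                    return dist
                for nxt in graph[node]:
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, dist + 1))
            return None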

    Read the article

  • How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x?

    - by TorakTu
    How can I get my ATI / AMD drivers to work with any kernel above 3.2.0.x? WHAT DID WORK Installed the original AMD64 version of the Ubuntu 12.04 ISO image. Burned a DVD and installed; it showed kernel 3.2.0-23 to begin with. Got 5.1 surround sound working. Got ATI (now AMD) video drivers installed for my Radeon HD R6870 video card from AMD's website. fglrxinfo came up and reported as normal. THE PROBLEM Kernel 3.2.0.x kept locking up, so I tried higher kernel versions. But the ATI / AMD drivers do not install on any kernel above 3.2.0.x. WHAT I HAVE TRIED I have gone over this tutorial many times ( https://help.ubuntu.com/community/BinaryDriverHowto/ATI ) and it doesn't work on ANY kernel except 3.2.0.x. The problem I am having here is that the ATI / AMD drivers work for 12.04 Precise with kernels 3.2.0-23 and -24, but the computer kept locking up. Although all my games would work, the lock-ups were random and constant. So I looked all over the web for 3 days trying to find an answer, and the suggested fix for the lock-up issue was simply to update the kernel. So I did. I have tried many kernels. All of them... no lock-ups. BUT the Restricted AMD drivers from the AMD website will not install. And none of the open-source AMD drivers have EVER installed, no matter what kernel or version I tried. EXAMPLE OUTPUT OF 3D TYPE OF ERRORS javax.media.opengl.GLException: glXGetConfig failed: error code GLX_NO_EXTENSION at com.sun.opengl.impl.x11.X11GLDrawableFactory.glXGetConfig(X11GLDrawableFactory.java:651) at com.sun.opengl.impl.x11.X11GLDrawableFactory.xvi2GLCapabilities(X11GLDrawableFactory.java:350) at com.sun.opengl.impl.x11.X11GLDrawableFactory.chooseGraphicsConfiguration(X11GLDrawableFactory.java:174) at javax.media.opengl.GLCanvas.chooseGraphicsConfiguration(GLCanvas.java:520) at javax.media.opengl.GLCanvas.<init>(GLCanvas.java:131) at haven.HavenPanel.<init>(HavenPanel.java:68) at haven.HavenPanel.<init>(HavenPanel.java:78) at haven.MainFrame.<init>(MainFrame.java:182) at haven.MainFrame.main2(MainFrame.java:306) at haven.MainFrame.access$100(MainFrame.java:34) at haven.MainFrame$7.run(MainFrame.java:360) at java.lang.Thread.run(Thread.java:722) And of course this is what fglrxinfo shows: X Error of failed request: BadRequest (invalid request code or no such operation) Major opcode of failed request: 139 (ATIFGLEXTENSION) Minor opcode of failed request: 66 () Serial number of failed request: 13 Current serial number in output stream: 13 EDIT: I forgot to mention that I DID look at this post over the last few days and it did not help.

    Read the article

  • Addressing threats introduced by the BYOD trend

    - by kyap
    With the growth of the mobile technology segment, enterprises are facing a new type of threat introduced by the BYOD (Bring Your Own Device) trend, where employees use their own devices (laptops, tablets or smartphones), not necessarily secured, to access the corporate network and information. In the past - actually even right now - enterprises used to provide laptops to their employees for their daily work, with specific operating systems including anti-virus and desktop management tools, in order to make sure that the pools of laptops allocated are spyware- and trojan-horse-free when accessing the internal network and sensitive information. But the BYOD reality is breaking this paradigm and opening new security holes for enterprises, as most of the username/password-based systems, especially the internal web applications, can be accessed from poorly protected or unprotected devices. To address this reality we can adopt 3 approaches: 1. Coué's approach: Close your eyes and assume that your employees are mature enough to know what they should or should not do. 2. Consensus approach: Provide a list of restricted and 'certified' devices for the internal network. 3. Military approach: Access internal systems from certified laptops ONLY. If you choose option 1: Thanks for visiting my blog and I hope you find the other entries more useful :) If you choose option 2: The proliferation of new hardware and software updates every quarter makes this approach very costly and difficult to maintain. If you choose option 3: You need to find a way to allow access to your sensitive applications from corporate-authorized machines only, managed by the IT administrators... but how? The challenge with option 3 is to find a way to restrict access to certain sensitive applications to authorized machines only - or, from another angle, to ensure end-users cannot access the sensitive applications if they are not using an authorized machine... So what if we find a way to store the application credentials secretly, hidden from the end-users, and then automatically submit them when the end-users access the application? With this model, end-users do not know the username/password needed to access the applications, so even if they use their own devices they will not be able to log in. Also, there's no need to reconfigure existing applications to adapt to a new authentication scheme, given that we still leverage the same username/password authentication model at the application level. To adopt this model, you can leverage Oracle Enterprise Single Sign-On. In short, Oracle ESSO is a desktop-based solution capable of storing credentials for web and native applications. At application startup, if the application is configured as an ESSO-enabled application - check out my previous post on how to make Skype ESSO-enabled - Oracle ESSO automatically takes over the sign-in sequence with the stored credentials on behalf of the end-user.
Combined with Oracle ESSO Provisioning Gateway, the credentials can be 'pushed' in advance from an actual provisioning server, like Oracle Identity Manager or Tivoli Identity Manager, so the end-users can log into sensitive applications without ever knowing the actual username and password - and therefore cannot log in from machines other than those secured by Oracle ESSO. Below is a graphical illustration of this approach: With this model, not only can you restrict access to sensitive applications to authorized machines, you can also implement much stronger password policies in terms of password complexity and password reset frequency, while end-users no longer need to remember the passwords at all. If you are interested, do not hesitate to check out the Oracle Enterprise Single Sign-On products on OTN!

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick. In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process - as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details. 1. Determine Access Why - Even if content is not something that needs to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to the users it relates to. This will greatly improve user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project. How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as metadata policies, for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project. Impact - Users can find information with less effort, as they will only be exposed to what they need for their work and can leverage advanced search features to take advantage of metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts. 2. Assign Workflow (optional depending on nature of content) Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements. How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow. Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached. By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project. This relieves management teams and content contributors of the burden of doing this for every piece of content. We will cover more about the management phase within the content lifecycle in our next installment.

    Read the article

  • Welcome to ubiquitous file sharing (December 08, 2009)

    - by user12612012
    The core of any file server is its file system and ZFS provides the foundation on which we have built our ubiquitous file sharing and single access control model.  ZFS has a rich, Windows and NFSv4 compatible, ACL implementation (ZFS only uses ACLs), it understands both UNIX IDs and Windows SIDs and it is integrated with the identity mapping service; it knows when a UNIX/NIS user and a Windows user are equivalent, and similarly for groups.  We have a single access control architecture, regardless of whether you are accessing the system via NFS or SMB/CIFS.The NFS and SMB protocol services are also integrated with the identity mapping service and shares are not restricted to UNIX permissions or Windows permissions.  All access control is performed by ZFS, the system can always share file systems simultaneously over both protocols and our model is native access to any share from either protocol.Modal architectures have unnecessary restrictions, confusing rules, administrative overhead and weird deployments to try to make them work; they exist as a compromise not because they offer a benefit.  Having some shares that only support UNIX permissions, others that only support ACLs and some that support both in a quirky way really doesn't seem like the sort of thing you'd want in a multi-protocol file server.  Perhaps because the server has been built on a file system that was designed for UNIX permissions, possibly with ACL support bolted on as an add-on afterthought, or because the protocol services are not truly integrated with the operating system, it may not be capable of supporting a single integrated model.With a single, integrated sharing and access control model: If you connect from Windows or another SMB/CIFS client: The system creates a credential containing both your Windows identity and your UNIX/NIS identity.  The credential includes UNIX/NIS IDs and SIDs, and UNIX/NIS groups and Windows groups. If your Windows identity is mapped to an ephemeral ID, files created by you will be owned by your Windows identity (ZFS understands both UNIX IDs and Windows SIDs). If your Windows identity is mapped to a real UNIX/NIS UID, files created by you will be owned by your UNIX/NIS identity. If you access a file that you previously created from UNIX, the system will map your UNIX identity to your Windows identity and recognize that you are the owner.  Identity mapping also supports access checking if you are being assessed for access via the ACL. If you connect via NFS (typically from a UNIX client): The system creates a credential containing your UNIX/NIS identity (including groups). Files you create will be owned by your UNIX/NIS identity. If you access a file that you previously created from Windows and the file is owned by your UID, no mapping is required. Otherwise the system will map your Windows identity to your UNIX/NIS identity and recognize that you are the owner.  Again, mapping is fully supported during ACL processing. The NFS, SMB/CIFS and ZFS services all work cooperatively to ensure that your UNIX identity and your Windows identity are equivalent when you access the system.  This, along with the single ACL-based access control implementation, results in a system that provides that elusive ubiquitous file sharing experience.

    Read the article

  • "Untrusted packages could compromise your system's security." appears while trying to install anything

    - by maria
    Hi, I've freshly installed Ubuntu 10.04 on a new computer. I'm trying to install the applications I need on it (my old computer is broken and I have to send it in for service). I've managed to install texlive, but then I can't install anything else. All the software I want is what I had successfully installed on my old computer (with the same version of Ubuntu), so I don't understand why the terminal says the following (sorry, the terminal talks half English, half Polish, but I hope it's enough): maria@marysia-ubuntu:~$ sudo aptitude install emacs Czytanie list pakietów... Gotowe Budowanie drzewa zaleznosci Odczyt informacji o stanie... Gotowe Reading extended state information Initializing package states... Gotowe The following NEW packages will be installed: emacs emacs23{a} emacs23-bin-common{a} emacs23-common{a} emacsen-common{a} 0 packages upgraded, 5 newly installed, 0 to remove and 0 not upgraded. Need to get 23,9MB of archives. After unpacking 73,8MB will be used. Do you want to continue? [Y/n/?] Y WARNING: untrusted versions of the following packages will be installed! Untrusted packages could compromise your system's security. You should only proceed with the installation if you are certain that this is what you want to do. emacs emacs23-bin-common emacsen-common emacs23-common emacs23 Do you want to ignore this warning and proceed anyway? To continue, enter "Yes"; to abort, enter "No" I was trying to install other editors as well, with the same result. Since I was sure the packages I wanted to install were safe, I finally entered "Yes". The installation ended successfully, but the editor doesn't understand any .tex file (the .tex files are definitely fine): this is pdfTeX, Version 3.1415926-1.40.10 (TeX Live 2009/Debian) restricted \write18 enabled. entering extended mode (./Szarfi.tex ! Undefined control sequence. l.2 \documentclass {book} ? What's more, I've realised that in the Synaptic Package Manager there is no package that is marked as supported by Canonical... Any tips? Thanks in advance

    Read the article

  • Is return-type-(only)-polymorphism in Haskell a good thing?

    - by dainichi
    One thing that I've never quite come to terms with in Haskell is how you can have polymorphic constants and functions whose return type cannot be determined by their input type, like class Foo a where foo::Int -> a Some of the reasons that I do not like this: Referential transparency: "In Haskell, given the same input, a function will always return the same output", but is that really true? read "3" returns 3 when used in an Int context, but throws an error when used in, say, an (Int,Int) context. Yes, you can argue that read is also taking a type parameter, but the implicitness of the type parameter makes it lose some of its beauty in my opinion. Monomorphism restriction: One of the most annoying things about Haskell. Correct me if I'm wrong, but the whole reason for the MR is that a computation that looks shared might not be shared, because the type parameter is implicit. Type defaulting: Again one of the most annoying things about Haskell. It happens e.g. if you pass the result of functions polymorphic in their output to functions polymorphic in their input. Again, correct me if I'm wrong, but this would not be necessary without functions whose return type cannot be determined by their input type (and polymorphic constants). So my question is (running the risk of being stamped as a "discussion question"): Would it be possible to create a Haskell-like language where the type checker disallows these kinds of definitions? If so, what would be the benefits/disadvantages of that restriction? I can see some immediate problems: If, say, 2 only had the type Integer, 2/3 wouldn't type check anymore with the current definition of /. But in this case, I think type classes with functional dependencies could come to the rescue (yes, I know that this is an extension). Furthermore, I think it is a lot more intuitive to have functions that can take different input types than to have functions that are restricted in their input types but are passed polymorphic values. The typing of values like [] and Nothing seems to me like a tougher nut to crack. I haven't thought of a good way to handle them. I doubt I am the first person to have had thoughts like these. Does anybody have links to good discussions about this Haskell design decision and the pros/cons of it?
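
    To make the first point concrete, here is a minimal sketch (the Foo instances are hypothetical and exist only to show the return type selecting the instance):

        class Foo a where
          foo :: Int -> a

        instance Foo Bool where
          foo n = n > 0

        instance Foo Integer where
          foo n = fromIntegral n * 10

        main :: IO ()
        main = do
          print (foo 3 :: Bool)     -- True : the annotation, not the argument, picks the instance
          print (foo 3 :: Integer)  -- 30   : same call site, different instance
          print (read "3" :: Int)   -- fine in an Int context
          -- print (read "3" :: (Int, Int))  -- type checks, but throws a parse error at runtime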

    Read the article

  • Are separate business objects needed when persistent data can be stored in a usable format?

    - by Kylotan
    I have a system where data is stored in a persistent store and read by a server application. Some of this data is only ever seen by the server, but some of it is passed through unaltered to clients. So, there is a big temptation to persist data - whether whole rows/documents or individual fields/sub-documents - in the exact form that the client can use (eg. JSON), as this removes various layers of boilerplate, whether in the form of procedural SQL, an ORM, or any proxy structure which exists just to hold the values before having to re-encode them into a client-suitable form. This form can usually be used on the server too, though business logic may have to live outside of the object, On the other hand, this approach ends up leaking implementation details everywhere. 9 times out of 10 I'm happy just to read a JSON structure out of the DB and send it to the client, but 1 in every 10 times I have to know the details of that implicit structure (and be able to refactor access to it if the stored data ever changes). And this makes me think that maybe I should be pulling this data into separate business objects, so that business logic doesn't have to change when the data schema does. (Though you could argue this just moves the problem rather than solves it.) There is a complicating factor in that our data schema is constantly changing rapidly, to the point where we dropped our previous ORM/RDBMS system in favour of MongoDB and an implicit schema which was much easier to work with. So far I've not decided whether the rapid schema changes make me wish for separate business objects (so that server-side calculations need less refactoring, since all changes are restricted to the persistence layer) or for no separate business objects (because every change to the schema requires the business objects to change to stay in sync, even if the new sub-object or field is never used on the server except to pass verbatim to a client). So my question is whether it is sensible to store objects in the form they are usually going to be used, or if it's better to copy them into intermediate business objects to insulate both sides from each other (even when that isn't strictly necessary)? And I'd like to hear from anybody else who has had experience of a similar situation, perhaps choosing to persist XML or JSON instead of having an explicit schema which has to be assembled into a client format each time.

    Read the article

  • List of common pages to have in the footer [closed]

    - by user359650
    I would like to post this question as a reference for webmasters wondering what pages they should include in the footer. I will use answers to complete my initial list: About us / About MyCompany / MyCompany About / About us: description about the company, its mission, and its vision. History: summary of milestones achieved by the company. The team / Management / Board of directors: depending on size of the company there may be one of more pages describing the people involved in the company, depending on their position. Awards: list of awards received by the company if any. In the press / They're talking about us: list of links to external websites, usually highly regarded news websites, which mentioned the company in one of their articles. Media Wallpapers: wallpapers with company logo in different colors and formats that fans can set as desktop image for their computer. logos: company logo in different colors and formats that websites/blogs posting about the website can use for illustration purposes. Media kits: documents, usually in PDF format summarizing the key company figures and facts that journalists can download and read to get a quick overview of the company. Misc Contact / Contact us: contact details the company is prepared to disclose if any (address, email, phone) or contact form. Careers / Jobs / Join us: list of open vacancies with contact form to apply. Investors / Partners / Publishers: information and contact forms for companies willing to become Investors/Partners/Publishers or login page to access portal restricted to those who already are. FAQ: list of common questions and answers to guide users and reduce number of support requests. Follow us / Community Facebook / Twitter / Google+: links to the company's pages/accounts on various social networks. Legal Terms / Terms of use / Terms & Conditions: rules users must follow when browsing the website. Privacy / Privacy Statement: explanations as to how the company deals with users' personal data and what users can do about it (request information to be deleted...). cookies: page that starts appearing on more and more websites due to new regulation (notably EU) imposing more transparency and control for users about cookies (e.g. BBC cookie page). Any input is welcome PS: if someone with enough rep could add the footer tag that would be great (min. 300 required).

    Read the article

  • Why does my PowerShell script hang when called in PSEXEC via a batch (.cmd) file?

    - by Kev
    I'm trying to remotely execute a PowerShell script using PSEXEC. The PowerShell script is called via a .cmd batch file. The reason we do this is to change the execution policy, run the powershell script then reset the execution policy again: On the remote server do-tasks.cmd looks like: powershell -command "&{ set-executionpolicy unrestricted}" powershell DoTasks.ps1 powershell -command "&{ set-executionpolicy restricted}" The PowerShell script DoTasks.ps1 just does this for now: Write-Output "Hello World!" Both of these scripts live in c:\windows\system32 (for now) just so they're on the PATH. On the originating server I do this: psexec \\web1928 -u administrator -p "adminpassword" do-tasks.cmd When this runs I get the following response at the command line: c:\Windows\system32>powershell -command "&{ set-executionpolicy unrestricted}" and the script runs no further. I can't ctrl-c to break the script and I just see ^C characters, I can type input from the keyboard and the characters are echoed to console. On the remote server I see that PowerShell.exe and CMD.exe are running in Task Manager's Process tab. If I end these processes then control returns to the command line on the originating server. I have tried this with just a simple .cmd batch file with a @echo hello world and it works just fine. Running do-tasks.cmd on the remote server via an RDP session works ok as well. Why is my remote batch file getting stuck when executing via PSEXEC?
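
    For illustration, one commonly suggested variant of such a wrapper (a sketch, not a verified fix for this particular setup) avoids the interactive execution-policy changes entirely and keeps PowerShell from waiting on console input when it is launched non-interactively via PSExec:

        rem do-tasks.cmd - sketch: bypass the execution policy for this invocation only
        rem and tell PowerShell not to expect console input, a common cause of hangs
        rem when the script is launched remotely under PSExec.
        powershell -NonInteractive -InputFormat None -ExecutionPolicy Bypass -File C:\Windows\System32\DoTasks.ps1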

    Read the article

  • FTP timing out after login

    - by Imran
    For some reason I can't access any of my accounts on my dedicated server via FTP. It simply times out when it tries to display the directories. Here's a log from FileZilla... Status: Resolving address of testdomain.com Status: Connecting to 64.237.58.43:21... Status: Connection established, waiting for welcome message... Response: 220---------- Welcome to Pure-FTPd [TLS] ---------- Response: 220-You are user number 3 of 50 allowed. Response: 220-Local time is now 19:39. Server port: 21. Response: 220-This is a private system - No anonymous login Response: 220-IPv6 connections are also welcome on this server. Response: 220 You will be disconnected after 15 minutes of inactivity. Command: USER testaccount Response: 331 User testaccount OK. Password required Command: PASS ******** Response: 230-User testaccount has group access to: testaccount Response: 230 OK. Current restricted directory is / Command: SYST Response: 215 UNIX Type: L8 Command: FEAT Response: 211-Extensions supported: Response: EPRT Response: IDLE Response: MDTM Response: SIZE Response: REST STREAM Response: MLST type*;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*; Response: MLSD Response: ESTP Response: PASV Response: EPSV Response: SPSV Response: ESTA Response: AUTH TLS Response: PBSZ Response: PROT Response: 211 End. Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is your current location Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Response: 227 Entering Passive Mode (64,237,58,43,145,153) Command: MLSD Response: 150 Accepted data connection Response: 226-ASCII Response: 226-Options: -a -l Response: 226 18 matches total Error: Connection timed out Error: Failed to retrieve directory listing I have restarted the FTP service several times but it still doesn't load. I only have this problem when my server is reaching its peak usage, which is still only a load of 1.0 (4 cores) and 40% of 4GB RAM. The FTP connections aren't maxed out because only my colleague and I have access to FTP on the server.
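
    Since the session stalls right after entering passive mode, one thing commonly checked in this situation (a sketch under the assumption of a Debian/Ubuntu-style Pure-FTPd install; the paths may differ on a control-panel build) is whether the passive data ports are reachable, by pinning them to a fixed range and opening that range in the firewall:

        # /etc/pure-ftpd/conf/PassivePortRange - pin passive data connections to a known range
        30000 35000

        # allow that range through the firewall and restart the service
        iptables -A INPUT -p tcp --dport 30000:35000 -j ACCEPT
        /etc/init.d/pure-ftpd restart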

    Read the article

  • Distributed development staff needing a common IP range

    - by bakasan
    I work on a development staff that is geographically distributed, mostly all throughout the state of CA, but several key members also must travel frequently. We rely quite heavily on a 3rd party provider API for a great deal of our subsystems (can't get into who it is or what they do). The 3rd party however is quite stringent on network access and has no notion of a development sandbox. Access is restricted to 2 or 3 IP addresses and that's about it. Once we account for our production servers, that leaves us with an IP or two to spare for our dev team - which is still problematic as people's home IPs change, people travel, we have more than 2 devs, etc. Wide IP blocks are not permitted by the 3rd party. Nor will they allow dynamic DNS type services. There is no simple console to swap IPs on the fly either (e.g. if a dev's IP at home changes or they are on the road). As none of us are deep network experts, I'm wondering what our viable options are. Are there such things as 3rd-party-hosted VPNs? Generally I think of a VPN as a mechanism to gain access to a home office, but the notion would be a 3rd party VPN that we'd all connect to and that we'd register as an IP origin with our 3rd party. We've considered using Amazon EC2 to effectively host a dev environment for each dev and using that to connect. Amazon only gives you so many static IPs however (I believe 5?) so this would only be a stop-gap solution until our team size outstrips our IP count at Amazon. Those were the only viable thoughts that I had, but again, I'm far from a networking guy. I tried searching for similar threads, but I'm not even sure I know the right vernacular to look around for.

    Read the article

  • nginx + wordpress in /wordpress subdir

    - by nkr1pt
    I installed nginx and would like to setup wordpress as a final step. I followed many howtos but am unable to get it working. The setup is fairly straightforward, the root dir of the webserver is /data/Sites/nkr1pt.homelinux.net. In that root dir I created a symlink to the wordpress folder in /usr/local/wordpress, so in fact all wordpress files can be accessed at /data/Sites/nkr1pt.homelinux.net/wordpress. Permissions are ok. The plan is to get wordpress working at http://sirius/wordpress, the server's name is sirius. spawn-fcgi is running and listening on port 7777. Here you can see the relevant config: server { listen 80; listen 8080; server_name sirius; root /data/Sites/nkr1pt.homelinux.net; passenger_enabled on; passenger_base_uri /redmine; #charset koi8-r; #access_log logs/access.log main; location ^~ /data { root /data/Sites/nkr1pt.homelinux.net; autoindex on; auth_basic "Restricted"; auth_basic_user_file htpasswd; } location ^~ /dump { root /data/Sites/nkr1pt.homelinux.net; autoindex on; } location ^~ /wordpress { try_files $uri $uri/ /wordpress/index.php; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:7777 location ~ \.php$ { #fastcgi_split_path_info ^(/wordpress)(/.*)$; fastcgi_pass localhost:7777; #fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; #index index.php; } please note that redmine, and the locations dump and data are working perfectly, it is only wordpress that I cannot get to work. Can you please help me to the correct wordpress configuration in nginx? All help is much appreciated!
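
    For comparison, a minimal sketch of a subdirectory WordPress setup that works in many nginx configurations (the paths and the port-7777 FastCGI backend are taken from the question; whether symlink permissions on /usr/local/wordpress or an open_basedir restriction is the actual culprit here is only an assumption):

        location /wordpress {
            # serve WordPress from the symlinked folder under the site root
            try_files $uri $uri/ /wordpress/index.php?$args;
        }

        location ~ \.php$ {
            try_files $uri =404;                              # don't pass missing scripts to PHP
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;  # resolves through the /wordpress symlink
            fastcgi_pass 127.0.0.1:7777;
        }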

    Read the article

  • netlogon errors

    - by rorr
    I have two instances of mssql 2005 and am using CA XOSoft replication. The master is a failover cluster and the replica is a standalone server. They are all running Server 2003 sp2 x64. Same patch levels on all servers. This setup has worked great for several months until we recently restricted the RPC ports on both nodes of the master(5000 - 6000 using rpccfg.exe). We have to implement egress filtering, thus the limiting of the ports. We began receiving login errors for sql windows authentication and NETLOGON Event ID: 5719: This computer was not able to set up a secure session with a domain controller in domain due to the following: Not enough storage is available to process this command. This may lead to authentication problems. Make sure that this computer is connected to the network. If the problem persists, please contact your domain administrator. We also see group policies failing to update and cluster file shares go offline at the same time. The RPC ports were set back to default when we started seeing these problems and the servers rebooted, but the problems persist. The domain controllers are not showing any errors. Running dcdiag and netdiag shows everything is fine. We have noticed that the XOSoft service ws_rep.exe is using a lot of handles(8 - 9k), about the same number that sqlserver is using. As soon as xosoft replication is stopped the login errors cease and everything functions correctly. I have opened a ticket with CA for XOSoft, but I'm not sure that the problem is actually xosoft, but that it is the one bringing the problem to light. I'm looking for tips on debugging RPC problems. Specifically on limiting the ports and then reverting the changes.

    Read the article

  • How to setup equivalent USVIDEO.ORG DNS-Proxy on Linux

    - by Gary
    I have a VPS in the USA running Ubuntu. I want to set up something similar to http://www.usvideo.org Basically, USVIDEO is a DNS service that allows Canadians to access American content like Hulu, Netflix, NBC, etc. (content restricted by geographical IP). Here is how I think USVideo does it: Clients (PS3, XBOX, PC) specify the DNS server(s) listed on USVIDEO.org's website. If the DNS request is for a video/audio site such as Netflix or Pandora, forward the request to a proxy. Otherwise, for all other requests, forward it to a different DNS server. If the specific video/audio URL is requested, return the address of the proxy server, which in turn relays traffic to the destination video/audio domain via the U.S. gateway so that it appears that the access is coming from a U.S. IP address. Once the DNS request has passed the U.S. IP address check, their proxy server steps out of the loop and lets the video streaming site contact you directly to start the video stream. This trick relies on the way that the video streaming sites check the country of your IP address once up front, but don't actually check the country of the destination IP address while the video is streaming. What is elegant about this solution is that a VPN tunnel is not required to bypass geographical IP checks from certain websites. All that is required on the client side is to specify the DNS server (the VPS). If a certain site is geographically locked, just forward the traffic to a proxy, and that's it. These sites can be specified in the DNS entries, or perhaps in the proxy service to redirect the DNS request to its own proxy. I believe what I need to set up something similar is Squid Proxy, iptables, and DNS. What I need help with is how exactly to approach this. Would Squid Proxy be set up as a transparent proxy?
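
    As an illustration of the DNS half of such a setup (purely a sketch; the domain list and the 198.51.100.10 proxy address are made-up placeholders), dnsmasq can answer the geo-restricted names with the proxy's address and forward everything else to a normal resolver:

        # /etc/dnsmasq.conf - answer selected domains with the US proxy's address,
        # forward every other query upstream
        address=/netflix.com/198.51.100.10
        address=/hulu.com/198.51.100.10
        server=8.8.8.8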

    Read the article

  • FTP on Linux "Failed to retrieve directory listing" not firewall issue

    - by Jaka Prasnikar
    I've got a VPS in Germany running Debian x64. I have a very strange issue. I have the ISPConfig control panel installed using proftpd, and I cannot connect to FTP by any means. A few hours ago I had DirectAdmin installed on CentOS on the same VPS, with the same issue. Simply put, when I connect to the FTP server I get this: Status: Resolving address of web02.defikon.com Status: Connecting to 130.255.190.71:21... Status: Connection established, waiting for welcome message... Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ---------- Response: 220-You are user number 1 of 50 allowed. Response: 220-Local time is now 12:15. Server port: 21. Response: 220-This is a private system - No anonymous login Response: 220-IPv6 connections are also welcome on this server. Response: 220 You will be disconnected after 15 minutes of inactivity. Command: USER default1 Response: 331 User default1 OK. Password required Command: PASS ****** Response: 230-User default1 has group access to: client0 sshusers Response: 230 OK. Current restricted directory is / Command: OPTS UTF8 ON Response: 200 OK, UTF-8 enabled Status: Connected Status: Retrieving directory listing... Command: PWD Response: 257 "/" is your current location Command: TYPE I Response: 200 TYPE is now 8-bit binary Command: PASV Error: Connection timed out Error: Failed to retrieve directory listing I even tried telnet localhost 21, and the same thing happens. Once I issue the "LIST" command I get a timeout. I've tried everything and I can't get this to work =( Please help! P.S.: iptables is turned off.

    Read the article

  • Windows 32-bit and 64-bit and GPT

    - by MrLane
    I know similar questions have been asked before across several sites, but the answers, at least to me, have been confusing and conflicting. My understanding has always been that 64-bit Windows will create and use GPT disks just fine, but will not boot from them without a UEFI BIOS. Also, my understanding WAS that 32-bit Windows could not use GPT at all and so is always restricted to 2.2TB disks, which was another reason to move to 64-bit on top of the 4GB memory limit. But I have now read that this isn't correct: 32-bit Windows will create and use GPT disks just as 64-bit does. The only restriction is that you can't boot 32-bit Windows from them even if you DO have a UEFI BIOS? I don't think much of the literature has explained this well. There are several tools floating around for creating virtual disks or 2.2TB + 0.8TB partition schemes and such for 32-bit systems. Why, when it seems you can use GPT in 32-bit Windows anyway? It also seems that people blame MS for lagging behind with respect to all of this, but it seems the issue is with BIOS manufacturers not supporting UEFI rather than MS not supporting GPT... Is my new understanding now correct?

    Read the article

  • How to connect the virtual networks of vmware guests running on different hosts?

    - by gyrolf
    In a test setup, we are running several virtual machines on a single vmware workstation host. All virtual machines are connected via a "host only" network. This runs fine up to 2 or 3 virtual machines (depending on the host hardware). To allow more virtual machines, we want to use more host machines. Details about the environment and applications: Host PCs are running Windows XP in a corporate intranet. VMware used is Workstation 6.5 Guests are running Windows Server 2003 All guests act as Web Servers One of the guests additionally acts as Windows File server, offering shared folders for the other guests to connect to. Restrictions: VMware guests shall not be visible from the intranet. Changes to the host PC are restricted by corporate policy. In the virtual network, no domain controller exists. All virtual machines are member of the same workgroup. Running the virtual network as NAT is possible. Port forwarding might be used if it does not conflict with ports used by the host PC. Looking for a solution, I found hints about using router or vpn software on the hosts, but without any details how to setup. (I found a similar question Sharing the network between 2 VMware hosts, but the answer was not sufficient for me.)

    Read the article

  • nginx + @font-face + Firefox / IE9

    - by Philip Seyfi
    Just transferred my site from a shared hosting to Linode's VPS, and I'm also completely new to nginx, so please don't be harsh if I missed something evident ^^ I've got my WordPress site running pretty well on nginx & MaxCDN, but my @font-face fonts (served from cdn.domain.com) stopped working in IE9 and FF (@font-face failed cross-origin request. Resource access is restricted.) I've googled for hours and tried adding all of the following to my config files: location ~* ^.+\.(eot|otf|ttf|woff)$ { add_header Access-Control-Allow-Origin *; } location ^/fonts/ { add_header Access-Control-Allow-Origin *; } location / { if ($request_filename ~* ^.*?/([^/]*?)$) { set $filename $1; } if ($filename ~* ^.*?\.(eot)|(otf)|(ttf)|(woff)$){ add_header 'Access-Control-Allow-Origin' '*'; } } With all of the following combinations: add_header Access-Control-Allow-Origin *; add_header 'Access-Control-Allow-Origin' *; add_header Access-Control-Allow-Origin '*'; add_header 'Access-Control-Allow-Origin' '*'; Of course, I've restarted nginx after every change. The headers just don't get sent at all no matter what I do. I have the default Ubuntu apt-get build nginx which should include the headers module by default... How do I check what modules are installed, or what else could be causing this error?
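
    For reference, a location block along these lines does emit the header in many setups (a sketch, not a guaranteed fix: if another regex location such as the PHP handler matches first, or if MaxCDN cached the font files before the header was added, the browser will keep reporting the failure until the CDN cache is purged):

        location ~* \.(eot|otf|ttf|woff)$ {
            add_header Access-Control-Allow-Origin *;
        }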

    Read the article

  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password protect all of port 8081 on my Nginx server. The only thing this port is used for is PhpMyAdmin. When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the PhpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permission for my htpasswd file is set to 644. Here is the code for my server block: server { listen 8081; server_name example.com www.example.com; root /usr/share/phpmyadmin; auth_basic "Restricted Area"; auth_basic_user_file htpasswd; } I have also tried entirely commenting out #root /usr/share/phpmyadmin; However, it doesn't make any difference. Is my problem confined to using the incorrect root path? If so, how can I find the root path for PhpMyAdmin? If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
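
    One plausible reading of the config shown (an assumption, not a confirmed diagnosis): with root /usr/share/phpmyadmin, a request for /phpmyadmin is looked up at /usr/share/phpmyadmin/phpmyadmin, which does not exist, hence the 404 - and the block has no PHP handler, so even the correct path would not execute index.php. A minimal sketch that serves phpMyAdmin at https://www.example.com:8081/ directly (the PHP-FPM socket path is assumed):

        server {
            listen 8081;
            server_name example.com www.example.com;

            root /usr/share/phpmyadmin;
            index index.php;

            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumed backend; adjust to the actual PHP setup
            }
        }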

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to password-protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored. When wp-login.php or the wp-admin folder is called, it goes directly to the normal WordPress login. I installed everything from the command line in the following way: sudo apt-get install apache2-utils sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser New password: Re-type new password: Adding password for user exampleuser My Nginx configuration file looks like this: server { listen 80; root /var/www; index index.php index.html index.htm; server_name example.com; location / { try_files $uri $uri/ /index.html; } location /bitmall/wp-admin/ { auth_basic "Restricted Section"; auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd; } location ~ /\.ht { deny all; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /var/www; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } I would appreciate your advice on this.
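
    A likely explanation (an assumption based on the config shown): for any request ending in .php, nginx selects the regex location ~ \.php$, so the auth_basic directives in the /bitmall/wp-admin/ prefix location are never consulted - and wp-login.php does not live under wp-admin/ in any case. One common way to protect both, sketched against the paths from the question (with the .htpasswd moved outside the served tree):

        location = /bitmall/wp-login.php {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/.htpasswd;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }

        location ^~ /bitmall/wp-admin/ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/.htpasswd;

            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }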

    Read the article

  • Parental Controls in Ubuntu - per user

    - by Hamish Downer
    I would like to set up parental controls on Ubuntu for a friend of mine. I want it so that the child's user account has the controls set, but the parent's user account is not restricted. To be clear, they are sharing one computer, so a router-based solution won't help. And I would like a set of step-by-step instructions to do this - just one way of doing it. I'm an experienced Ubuntu user, happy at the command line. I've spent quite some time googling for this along the way. I hope that the GChildCare project will eventually make this easy, but it is not ready yet. In the meantime, the WebContentControl GUI provides a way of managing parental controls, but applies them to every user on the computer (easy WebContentControl install instructions, and detailed instructions, discussion and related links on ubuntuforums). The ubuntuforums post has a FAQ that states that user-specific configuration is not possible with WebContentControl, and then provides 3 links the author used to help him do it. But they are far from step-by-step instructions. There is this thread, which is notes taken along the way, linking to this article about squid and dansguardian, and then to these two dansguardian articles, which are somewhat in depth... So does anyone know of an existing guide to setting up parental controls on Ubuntu with some users not affected? If no one has come up with an answer after a little bit, I'll set up a community wiki answer so we can come up with a guide.

    Read the article

  • Mirror a Dropbox repository in Sharepoint and restrict access

    - by Dan Robson
    I'm looking for an elegant way to solve the following problem: My development team uses Dropbox for sharing documents amongst our immediate group. We'd like to put some of those documents into a SharePoint repository for the larger group to be able to access, as granting Dropbox access to the group at large is not ideal. However, we'd like to continue to be able to propagate changes to the SharePoint site simply by updating the files in Dropbox on our local client machines, and also vice versa - users granted access on SharePoint that update files in that workspace should be able to save their files and the changes should appear automatically on our client PC's. I've already done the organization of the folders so that in Dropbox, there exists a SharePoint folder that looks something like this: SharePoint ----Team --------Restricted Access Folders ----Organization --------Open Access Folders The Dropbox master account and the SharePoint master account are both set up on my file server. Unfortunately, Dropbox doesn't seem to allow syncing of folders anywhere above the \Dropbox\ part of the file system's hierarchy - or all I would have to do is find where the Sharepoint repository is maintained locally, and I'd be golden. So it seems I have to do some sort of 2-way synchronization between the Dropbox folder on the file server and the SharePoint folder on the file server. I messed around with Microsoft SyncToy, but it seems to be lacking in the area of real-time updating - and as much as I love rsync, I've had nothing but bad luck with it on Windows, and again, it has to be kicked off manually or through Task Scheduler - and I just have a feeling if I go down that route, it's only a matter of time before I get conflicts all over the place in either Dropbox, SharePoint, or both. I really want something that's going to watch both folders, and when one item changes, the other automatically updates in "real-time". It's quite possible I'm going down the entirely wrong route, which is why I'm asking the question. For simplicity's sake, I'll restate the goal: To be able to update Dropbox and have it viewable on the SharePoint site, or to update the SharePoint site and have it viewable in Dropbox. And since I'm a SharePoint noob, I'll also need help hiding the "Team" subfolder from everyone not in a specific group in AD.

    Read the article
