Search Results

Search found 22880 results on 916 pages for 'session cookie domain'.

  • SSH Public Key Authentication only works if an active session already exists

    - by Webx10
    I have a rather strange problem with my SSH configuration. I set up my server with the help of a Remote Access Card and configured everything with a KVM viewer. While logged into the server via the KVM viewer, I configured SSH to allow public key authentication only and tried to log in from my local laptop. It worked fine. But if I quit the KVM session (or log out the user in the KVM session), I can no longer log in via SSH (pubkey denied). SSH login only works as long as the user is still logged in somewhere. Any hints as to what the problem might be?

    Console output for a failed login (all personal data replaced):

        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /Users/mylocaluser/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 103: Applying options for *
        debug1: Connecting to 100.100.100.100 [100.100.100.100] port 12345.
        debug1: Connection established.
        debug1: identity file /Users/mylocaluser/.ssh/id_rsa type 1
        debug1: identity file /Users/mylocaluser/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/mylocaluser/.ssh/id_dsa type -1
        debug1: identity file /Users/mylocaluser/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
        debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA ab:12:23:34:45:56:67:78:89:90:12:23:34:45:56:67
        debug1: Host '[100.100.100.100]:12345' is known and matches the RSA host key.
        debug1: Found key in /Users/mylocaluser/.ssh/known_hosts:36
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/mylocaluser/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Offering RSA public key: /Users/mylocaluser/.ssh/id_rsa2
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/mylocaluser/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Console output for a successful login (only possible while an "active session" exists):

        OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
        debug1: Reading configuration data /Users/mylocaluser/.ssh/config
        debug1: Reading configuration data /etc/ssh_config
        debug1: /etc/ssh_config line 20: Applying options for *
        debug1: /etc/ssh_config line 103: Applying options for *
        debug1: Connecting to 100.100.100.100 [100.100.100.100] port 12345.
        debug1: Connection established.
        debug1: identity file /Users/mylocaluser/.ssh/id_rsa type 1
        debug1: identity file /Users/mylocaluser/.ssh/id_rsa-cert type -1
        debug1: identity file /Users/mylocaluser/.ssh/id_dsa type -1
        debug1: identity file /Users/mylocaluser/.ssh/id_dsa-cert type -1
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_6.2
        debug1: Remote protocol version 2.0, remote software version OpenSSH_6.6.1p1 Ubuntu-2ubuntu2
        debug1: match: OpenSSH_6.6.1p1 Ubuntu-2ubuntu2 pat OpenSSH*
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-ctr [email protected] none
        debug1: kex: client->server aes128-ctr [email protected] none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Server host key: RSA ab:12:23:34:45:56:67:78:89:90:12:23:34:45:56:67
        debug1: Host '[100.100.100.100]:12345' is known and matches the RSA host key.
        debug1: Found key in /Users/mylocaluser/.ssh/known_hosts:36
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: Roaming not allowed by server
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/mylocaluser/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug1: Authentication succeeded (publickey).
        Authenticated to 100.100.100.100 ([100.100.100.100]:12345).
        debug1: channel 0: new [client-session]
        debug1: Requesting [email protected]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = de_DE.UTF-8
        Welcome to Ubuntu 14.04.1 LTS

  • Unable to make the session state request to the session state server.

    - by Angry_IT_Guru
    For about 4-5 months now, I seem to be having a sporadic issue, mainly during our busiest time of the day between 10:30-11:45 AM, where all my Windows 2003 web servers in a Microsoft NLB cluster start throwing session state server errors. A sample error is below.

        System.Web.HttpException: Unable to make the session state request to the session state server. Please ensure that the ASP.NET State service is started and that the client and server ports are the same. If the server is on a remote machine, please ensure that it accepts remote requests by checking the value of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\aspnet_state\Parameters\AllowRemoteConnection. If the server is on the local machine, and if the before mentioned registry value does not exist or is set to 0, then the state server connection string must use either 'localhost' or '127.0.0.1' as the server name.
        at System.Web.SessionState.OutOfProcSessionStateStore.MakeRequest(StateProtocolVerb verb, String id, StateProtocolExclusive exclusiveAccess, Int32 extraFlags, Int32 timeout, Int32 lockCookie, Byte[] buf, Int32 cb, Int32 networkTimeout, SessionNDMakeRequestResults& results)
        at System.Web.SessionState.OutOfProcSessionStateStore.SetAndReleaseItemExclusive(HttpContext context, String id, SessionStateStoreData item, Object lockId, Boolean newItem)
        at System.Web.SessionState.SessionStateModule.OnReleaseState(Object source, EventArgs eventArgs)
        at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
        at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)

    I'm using the ASP.NET State service on a centralized back-end Windows 2003 server that all servers communicate with. I was using SQL Server session state for a couple of years prior to this issue. The problem with SQL was that when the issue occurred, it created a blocking situation which essentially impacted all users across all servers. The product company recommended that I use the standard ASP.NET State service, as that was what they technically supported. Why this would make a difference is beyond me -- but I had no choice but to try it!

    I have attempted creating multiple application pools, adding additional servers, changing the TCP/IP timeout from 20 to 30 seconds, and even calling Microsoft ASP.NET product support, with very little success. I even recommended that they review whether they are using read-only session state instead of read/write per page request -- as I understand it, read/write basically causes every page to make round-trips to the state server even if state isn't being used on the page.

    Unfortunately, the application is developed by our product company, and they insist that it is something with my environment because other clients do not have these sorts of issues. However, I've talked to other clients, and they tell me that when they've seen issues like this, they've basically had to create another web farm. This issue almost seems like I've simply reached some architectural limit within the application...

    Microsoft's position on the issue is that the session state needs to be reduced: the return code being reported back from the state server indicates the buffers are full.

    To better understand the scope of the issue (rather than wait for customers to call and complain), I installed ELMAH and configured it to send me e-mails when unhandled exceptions occur. I basically get 500-1000 e-mails during the period of high activity! If anyone has any other ideas I could try, or better ways to troubleshoot, I'd appreciate it.

  • Transferring users and search engines to a new domain

    - by eftpotrm
    I've been asked to take over the maintenance of an existing site that's being reworked. At present it serves localised content for several languages, but via a fairly unhelpful mechanism that means search engines essentially only have it indexed in English, and any deep links will de facto appear in English as well. So new localised sites are being built under separate domains; not just for this, there are other benefits.

    What we're then looking to do is redirect users to the new site, where appropriate. For humans this isn't a problem: we can send them through a gateway page on their first site visit, grab their language preference, put it in a cookie, then redirect them to the new localised content as soon as it's available. For search engines, this isn't so good... In principle I'm happy to simply bypass the gateway page and redirect known spiders to the new site, but this means we're serving radically different content (a different URL, even!) to human and robot users. Won't this therefore be regarded as cloaking and cause us grief? Anyone know a better way to handle this?
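    One way to sidestep the cloaking question entirely is to key the redirect on the request itself (stored cookie first, then Accept-Language) and give crawlers exactly the same response as humans, so there is no user-agent branching at all. A minimal sketch of that idea in Java servlet terms; the domain layout, cookie name, and class names are illustrative assumptions, not from the original site (whose stack isn't stated), and the filter is assumed to be deployed only on the legacy English domain:

        // Hypothetical sketch: redirect by preference, identically for bots and humans.
        import javax.servlet.*;
        import javax.servlet.http.*;
        import java.io.IOException;

        public class LocaleRedirectFilter implements Filter {
            public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                    throws IOException, ServletException {
                HttpServletRequest request = (HttpServletRequest) req;
                HttpServletResponse response = (HttpServletResponse) res;

                // 1. A previously stored preference wins.
                String lang = readCookie(request, "lang");
                // 2. Otherwise fall back to the browser's Accept-Language header.
                if (lang == null && request.getHeader("Accept-Language") != null) {
                    lang = request.getLocale().getLanguage(); // e.g. "de", "fr"
                }
                if (lang != null && !lang.equals("en")) {
                    // Permanent redirect to the localized domain; crawlers get the
                    // exact same response, so nothing is cloaked.
                    response.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);
                    response.setHeader("Location",
                            "http://" + lang + ".example.com" + request.getRequestURI());
                    return;
                }
                chain.doFilter(req, res);
            }

            private String readCookie(HttpServletRequest request, String name) {
                if (request.getCookies() == null) return null;
                for (Cookie c : request.getCookies()) {
                    if (name.equals(c.getName())) return c.getValue();
                }
                return null;
            }

            public void init(FilterConfig cfg) {}
            public void destroy() {}
        }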

  • Circular dependency and object creation when attempting DDD

    - by Matthew
    I have a domain where an Organization has People.

    Organization entity:

        public class Organization
        {
            private readonly List<Person> _people = new List<Person>();

            public Person CreatePerson(string name)
            {
                var person = new Person(this, name);
                _people.Add(person);
                return person;
            }

            public IEnumerable<Person> People
            {
                get { return _people; }
            }
        }

    Person entity:

        public class Person
        {
            public Person(Organization organization, string name)
            {
                if (organization == null)
                {
                    throw new ArgumentNullException("organization");
                }
                Organization = organization;
                Name = name;
            }

            public Organization Organization { get; private set; }
            public string Name { get; private set; }
        }

    The rule for this relationship is that a Person must belong to exactly one Organization. The invariants I want to guarantee are:

    - A person must have an organization: this is enforced via the Person's constructor.
    - An organization must know of its people: this is why Organization has a CreatePerson method.
    - A person must belong to only one organization: this is why the organization's people list is not publicly mutable (ignoring the casting to List; maybe ToEnumerable can enforce that, but I'm not too concerned about it).

    What I want out of this is that if a person is created, the organization knows about its creation. However, the problem with the model currently is that you can create a Person without ever adding it to the organization's collection. Here's a failing unit test that describes my problem:

        [Test]
        public void AnOrganizationMustKnowOfItsPeople()
        {
            var organization = new Organization();
            var person = new Person(organization, "Steve McQueen");
            CollectionAssert.Contains(organization.People, person);
        }

    What is the most idiomatic way to enforce the invariants and the circular relationship?
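    One common resolution to the final question is to make the aggregate root the only possible factory by narrowing the constructor's visibility. A minimal sketch of that idea in Java (package-private standing in for C#'s internal); the names mirror the post, but the code is illustrative, not from the thread:

        // Sketch: Person can only be constructed inside this package, so
        // Organization.createPerson() becomes the single creation path.
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class Organization {
            private final List<Person> people = new ArrayList<>();

            public Person createPerson(String name) {
                Person person = new Person(this, name); // legal: same package
                people.add(person);
                return person;
            }

            public List<Person> getPeople() {
                return Collections.unmodifiableList(people);
            }
        }

        class Person {
            private final Organization organization;
            private final String name;

            // Package-private: code outside the package cannot call `new Person(...)`,
            // so the failing test's bypass route no longer compiles from client code.
            Person(Organization organization, String name) {
                if (organization == null) throw new IllegalArgumentException("organization");
                this.organization = organization;
                this.name = name;
            }

            public Organization getOrganization() { return organization; }
            public String getName() { return name; }
        }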

  • Migrating a Windows Server to Ubuntu Server to provide Samba, AFP and Roaming Profiles

    - by Dan
    I'm replacing our old Windows XP Pro office server with an HP Microserver running Ubuntu Server 12.04 LTS. I'm not a Linux expert, but I can find my way around a terminal prompt; I'm a Mac user by choice. The office uses a mix of Windows XP Pro machines and OS X Lion laptops.

    I included Samba during installation, and I'm planning on using Netatalk for the AFP and Bonjour sharing. I'd quite like to have Samba make the server appear in 'My Network Places' on the Windows machines the way Bonjour makes it appear in Finder on the Macs, if this is possible.

    I want to get to the point where a user logging into Windows gets connected to the Ubuntu server (do they need an Ubuntu user account?), which gets them their shares and their Windows user profile (though a standard profile across users would do). The upshot is centralised control of user accounts (e.g. if a person leaves, killing their account on the server stops their Windows logon and their ability to access Samba shares) and ensuring files aren't stored on the individual machines, for backup/security purposes. I want to make this as simple as possible, so I don't want loads of stuff I don't need. I just can't figure out:

    What I need at the server end:
    - Will Samba be enough (already installed as part of the initial installation), or will I need to cock around with LDAP (and how does that interact with Samba)?
    - For someone of moderate Linux competence like me, is there a package that offers easy admin of user accounts, e.g. a GUI like phpLDAPadmin (if LDAP is necessary)?

    How to configure the XP machines:
    - Do I need to have the XP machines set up as a domain controller (I've no idea, really)?
    - Roaming profiles look to offer the feature of putting the user's files on the server rather than the machine itself, along with a profile that follows the user from machine to machine.

    Syncing Mac users' home folders with the server: this is less of a concern because I can set up Time Machine if it comes to it, but I'd appreciate any recommendations on what approach to take to having the Mac home folders synced to the server.

  • DISA Cross Domain Enterprise Solutions on the NetBeans Platform

    - by Geertjan
    Bray 2.0 is a tool based on the NetBeans Platform that assists in creating valid Data Flow Configuration (DFC) files. The DFC Specification was developed to provide a standardized way for defining, validating, and approving data flows for use on cross-domain guarding solutions. A DFC document specifies key entities such as security domains, guards that facilitate data between security domains, data flows that describe how data travels between security domains, filters that transform and validate the data, and more. Related info: http://www.disa.mil/Services/Information-Assurance/Cross-Domain-Solutions

    The Bray product is in development at Fulcrum IT (http://www.fulcrumco.com). The DFC Specification and Bray were developed in support of the US Department of Defense. Bray 2.0 marks the first release of Bray on the NetBeans Platform and utilizes a number of features that are core to the NetBeans Platform:

    - Modular plugability. Bray consumers can integrate their own tools, file types, and more into the product with relative ease.
    - Robust UI. The NetBeans Platform's intuitive UI makes it easy to access and manipulate multiple aspects of a DFC.
    - Explorer. The Explorer is a key component that makes the DFC XML easy to traverse, edit, and find errors in.
    - Context-sensitive help. JavaHelp can be readily integrated for the product as well as all the UI within it.
    - Editors. Any external file can be added to a DFC. Users can register their own editors or use the provided NetBeans editors to edit files.
    - Printing. The NetBeans Platform Print API makes it easy to determine what should be printed and how.

    Bray 2.0 provides a lot of key features for developing valid, robust DFC files:

    - XML validation. A DFC can be validated against the DFC schema specification.
    - DFC Check List. An interactive, minimal guide for creating a complete DFC.
    - Summary Window. The Summary Window functions like the Navigator in NetBeans IDE. The current "item of interest" is checked against various business rules, with the ability to quickly find and fix errors.
    - Change Log. Bray audits every change to a DFC and places it in a change log for users to peruse.
    - Comments. Users can optionally add comments for other users to see.
    - Digital signatures. DFC files can be digitally signed. A signature history and signature validation are provided in Bray.
    - Pluggable security schemes. Bray ships with plain-text and IC-ISM security schemes. If needed, users can integrate additional ones.

    ...and more to come! New features for Bray are constantly in development, including use of the NetBeans Visual Library, language support, and more.

  • DDD and Value Objects. Are mutable Value Objects a good candidate for Non Aggr. Root Entity?

    - by Tony
    Here is a little problem. I have an entity with a value object. Not a problem. I replace the value object with a new one; NHibernate inserts the new value, orphans the old one, then deletes it. OK, that's a problem.

    Insured is the entity in my domain. It has a collection of Addresses (value objects), one of which is the MailingAddress. When we want to update the mailing address -- let's say the zip code was wrong -- then, following Mr. Evans' doctrine, we must replace the old object with a new one, since it's immutable (a value object, right?). But we don't want to delete the row, though, because that address's PK is a FK in a MailingHistory table. So, following Mr. Evans' doctrine, we are pretty much screwed here. Unless I make my addresses entities, so I don't have to "replace" them and can simply update the zipcode member, like in the good old days.

    What would you suggest in this case? The way I see it, value objects are only useful when you want to encapsulate a group of database table columns (a component in NHibernate). Everything that has a persistence id in the database is better off made an entity (not necessarily an aggregate root), so you can update its members without recreating the whole object graph, especially if it's a deeply nested object. Do you concur? Is it allowed by Mr. Evans to have a mutable value object? Or is a mutable value object a candidate for an entity? Thanks
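    A minimal sketch of the usual value-object compromise, written in Java even though the post's stack is NHibernate/C#: keep the address immutable and "mutate" by copying, so a history table can keep referencing the old row while the owning entity points at the new value. The class and field names here are illustrative assumptions:

        // Sketch: an immutable value object whose "updates" produce new instances.
        public final class Address {
            private final String street;
            private final String zipCode;

            public Address(String street, String zipCode) {
                this.street = street;
                this.zipCode = zipCode;
            }

            // "Mutation" yields a new value; the old one can stay referenced
            // by MailingHistory-style rows without being touched.
            public Address withZipCode(String newZipCode) {
                return new Address(street, newZipCode);
            }

            public String getStreet()  { return street; }
            public String getZipCode() { return zipCode; }

            // Value semantics: equality by content, not identity.
            @Override public boolean equals(Object o) {
                if (!(o instanceof Address)) return false;
                Address a = (Address) o;
                return street.equals(a.street) && zipCode.equals(a.zipCode);
            }

            @Override public int hashCode() {
                return 31 * street.hashCode() + zipCode.hashCode();
            }
        }

    Anything that other tables reference by key (the history row) has identity of its own, which is the usual argument for modelling the *history entry* as an entity that holds an immutable Address value, rather than making Address itself mutable.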

  • Is it possible to keep nm-applet running between invocations of WM startup?

    - by serverninja
    I am using nm-applet to interface with NetworkManager, running xmonad as a window manager. My X sessions (including nm-applet) are set up with a /usr/local/bin/xmonad.start script. My question is: how can I keep nm-applet running in the background as long as X is running, but not necessarily xmonad? As mentioned above, it is being started with xmonad (and dying with it when xmonad is restarted, etc.). I am using gdm to manage my X sessions, and I'm running 10.10. Where's a good place to start nm-applet to suit my particular needs? I need to remove it from the control of xmonad, but I don't know where else to start it. Any help, tips, etc. appreciated.

    Edit: the problem seems to be with how I have integrated xmonad. I have the session script as a file in /usr/share/xsessions/xmonad.desktop with the following contents:

        [Desktop Entry]
        Encoding=UTF-8
        Name=XMonad
        Comment=Lightweight tiling window manager
        Exec=/usr/local/bin/xmonad.start
        Icon=xmonad.png
        Type=XSession

    /usr/local/bin/xmonad.start contains the following:

        #!/bin/bash
        xrdb -merge ~/.Xresources
        xcompmgr -c &
        trayer --edge top --align right --SetDockType true --SetPartialStrut true --expand true --width 8 --heighttype pixel --height 18 --transparent true --alpha 0 --tint 0x000000 &
        gnome-settings-daemon &
        gnome-screensaver &
        if [ -x /usr/bin/nm-applet ] ; then
            nm-applet --sm-disable &
        fi
        /usr/bin/urxvtd -q -o -f &
        eval `ssh-agent` &
        if [ -x /usr/bin/gnome-power-manager ] ; then
            sleep 1
            gnome-power-manager &
        fi
        /usr/bin/gnome-volume-control-applet &
        exec xmonad

    The question is how do I integrate xmonad, gdm, X, etc. in such a manner as to replicate the behavior I currently have, except with nm-applet (and possibly other programs) running whether or not xmonad is?

  • Remote server's X menus without vino, VNC, etc.

    - by Fredde
    A question for which both Google searches and askubuntu/Ubuntu forums searches have failed, though I've rephrased the question a number of times.

    I have a Lubuntu server with some storage and functions, and a Lubuntu laptop. Previously, when running WinXP, I had Xming: I could start an X session on the server, get lxpanel on the laptop, and switch between and run X programs without a hitch through the lxpanel menu. A very neat and convenient solution. However, WinXP crashed, so I moved to Lubuntu on the laptop as well. Things still work: I ssh into the server and can start X programs without a hitch. But, as with all graphical desktops, I at times need access to the LX menus on the server to find programs, and here the problems arise. Most of what I find talks about installing VNC, vino, etc. -- overkill that bypasses the existing X integration between the machines. I'd like to do as I did with Xming: see the server's menu system in my "client's" X session, just to browse the server's installed software, without colliding with the laptop's X server, running them as normal X apps.

  • Website Hosting/Registration [closed]

    - by Ricko M
    Possible Duplicate: How to find web hosting that meets my requirements?

    I am planning to launch a website soon. I wanted to know what solutions are available for hosting and registration. Starting with domain registration: any site you have used/preferred? I am considering either GoDaddy or 123-reg. Does it even make any difference which you choose? Is there any fine print I need to worry about? I am based in the UK, not sure if that helps in resolving any issues if encountered.

    Does my hosting need to be done at the site where I purchased my registration? If not, will there be any transfer fees if I change my hosting? Can I just register the name now and worry about hosting later?

    At the moment, I plan to have it up and running using either some sort of a tool or a template, and perhaps put the bells and whistles in down the line. I understand 123-reg has its own builder tool available, and there are a few solutions suggested like WordPress, Drupal & Joomla... I am a C++ developer, not a web programmer, but I do feel the need to open the hood up and make changes if I see fit. So I guess I am looking for a solution where I can easily drag and drop the widgets I need and, when the time comes, customize it. Which CMS would you recommend?

    Extras: what extras do you need to get? I was advised to get hold of WHOIS privacy to keep the spambots away; anything else you guys would recommend I keep my eyes open for before I sign on the dotted line?

  • I am trying to figure out the best way to cache domain objects

    - by Brett Ryan
    I've always done this wrong, and I'm sure a lot of others have too: hold a reference via a map, write through to the DB, etc. I need to do this right, and I just don't know how to go about it. I know how I want my objects to be cached, but I'm not sure how to achieve it. What complicates things is that I need to do this for a legacy system where the DB can change without notice to my application.

    So, in the context of a web application, let's say I have a WidgetService which has several methods:

        Widget getWidget();
        Collection<Widget> getAllWidgets();
        Collection<Widget> getWidgetsByCategory(String categoryCode);
        Collection<Widget> getWidgetsByContainer(Integer parentContainer);
        Collection<Widget> getWidgetsByStatus(String status);

    Given this, I could decide to cache by method signature, i.e. getWidgetsByCategory("AA") would have a single cache entry; or I could cache widgets individually, which would be difficult I believe; OR a call to any method would first cache ALL widgets with a call to getAllWidgets(), where getAllWidgets() would produce caches that match all the keys for the other method invocations. For example, take the following untested theoretical code:

        Collection<Widget> getAllWidgets() {
            Entity entity = cache.get("ALL_WIDGETS");
            Collection<Widget> res;
            if (entity == null) {
                res = loadCache();
            } else {
                res = (Collection<Widget>) entity.getValue();
            }
            return res;
        }

        Collection<Widget> loadCache() {
            // Get widgets from the underlying DB
            Collection<Widget> res = db.getAllWidgets();
            cache.put("ALL_WIDGETS", res);
            Map<String, List<Widget>> byCat = new HashMap<>();
            for (Widget w : res) {
                // cache by the different types of method calls, i.e. by category
                if (!byCat.containsKey(w.getCategory())) {
                    byCat.put(w.getCategory(), new ArrayList<Widget>());
                }
                byCat.get(w.getCategory()).add(w);
            }
            cacheCategories(byCat);
            return res;
        }

        Collection<Widget> getWidgetsByCategory(String categoryCode) {
            CategoryCacheKey key = new CategoryCacheKey(categoryCode);
            Entity ent = cache.get(key);
            if (ent == null) {
                loadCache();
                ent = cache.get(key);
            }
            return ent == null ? Collections.emptyList() : (Collection<Widget>) ent.getValue();
        }

    NOTE: I have not worked with a cache manager; the above code treats the cache as some object holding caches by key/value pairs, and it's not modelled on any specific implementation.

    Using this, I have the benefit of being able to cache all objects in the different ways they will be called for, with only single objects on the heap, whereas if I were to cache the method invocations via, say, Spring, it would (I believe) cache multiple copies of the objects.

    I really wish to understand the best ways to cache domain objects before I go down the wrong path and make it harder for myself later. I have read the documentation on the Ehcache website and found various articles of interest, but nothing that gives a good solid technique. Since I'm working with an ERP system, some DB calls are very complicated; it's not that the DB is slow, but the business representation of the domain objects makes it very clumsy. Coupled with the fact that there are actually 11 different DBs where information can be contained, which this application consolidates into a single view, caching is quite important.
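    A minimal sketch of the "load once, index many ways" idea described above, using a plain ConcurrentHashMap in place of a real cache manager. WidgetDb, Widget, and the accessors are placeholders (assumptions, not an existing API); the point is that every index shares the same Widget instances, so each object sits on the heap exactly once however it is looked up:

        import java.util.*;
        import java.util.concurrent.ConcurrentHashMap;

        class WidgetCache {
            private final WidgetDb db;                       // hypothetical DB gateway
            private final Map<Integer, Widget> byId = new ConcurrentHashMap<>();
            private final Map<String, List<Widget>> byCategory = new ConcurrentHashMap<>();
            private volatile boolean loaded = false;

            WidgetCache(WidgetDb db) { this.db = db; }

            // Load everything once; all indexes below share the same instances.
            private synchronized void loadIfNeeded() {
                if (loaded) return;
                for (Widget w : db.getAllWidgets()) {
                    byId.put(w.getId(), w);
                    byCategory.computeIfAbsent(w.getCategory(), k -> new ArrayList<>()).add(w);
                }
                loaded = true;
            }

            Collection<Widget> getAllWidgets() {
                loadIfNeeded();
                return Collections.unmodifiableCollection(byId.values());
            }

            Collection<Widget> getWidgetsByCategory(String categoryCode) {
                loadIfNeeded();
                return byCategory.getOrDefault(categoryCode, Collections.emptyList());
            }

            // For the legacy-DB problem mentioned in the post: since the DB can change
            // without notice, the cache can only be thrown away wholesale on a timer
            // or external signal and rebuilt on the next read.
            synchronized void invalidate() {
                byId.clear();
                byCategory.clear();
                loaded = false;
            }
        }

        interface WidgetDb { List<Widget> getAllWidgets(); }       // placeholder
        interface Widget { int getId(); String getCategory(); }    // placeholder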

  • How to prevent Network Manager from auto creating network connection profiles with "available to everyone" by default

    - by airtonix
    We have several laptops at work which use Ubuntu 11.10 64-bit. Our wifi access point requires WPA2-EAP authentication (backed by an LDAP server), and staff use these laptops for presentations via the guest account.

    By default, when you have a wifi card, Network Manager displays the available wireless access points, so the logical course of action for a Novice(tm) user is to single-left-click the easy-to-use option in the Network Manager drop-down list... At this point the staff member (who is logged in with the guest account) expects to just be able to connect and enter any authentication details if required. But because they are using the guest account, they won't ever have admin permissions (nor do I want them to), so PolKit kicks in with a request for admin authorisation. I solved this part by modifying the PolKit permissions to allow all users to create system network connections...

    However, because these staff members are logging onto the wifi access point with LDAP credentials, and because Network Manager is now saving those credentials as a system connection, their password is available to the next guest user session (system connection profiles are stored in /etc/NetworkManager/system-connections.d/*). It creates system connections by default because "Available to all users" is ticked by default when you quickly connect to a new wifi access point.

    I want Network Manager to not tick this by default. That way I can revert the changes I made to PolKit, and users' network connection profiles will be purged when they log out.

  • Can't Log Into Ubuntu 12.04

    - by Razick
    Yesterday, after turning on Ubuntu, I logged into a GNOME session. A few minutes later, I tried switching to Unity for a change. Unfortunately, the background and my desktop icons loaded, but the system bar and launcher failed to load even after several minutes. Unity had always worked fine for me before.

    I then tried the guest account, and it worked fine with both Unity and GNOME. However, the problem with my account got worse: I couldn't log into any desktop at all anymore. I would type in my password, press enter, and it would just sit there doing nothing. The computer no longer responded in any way, so I had to hold the power button and reboot. The same problem happened repeatedly.

    Earlier today, I tried to get on again. Now, when I try to log in, the computer no longer locks up, but instead flashes a black screen with the console output and what seems to be an error message before returning to the login screen. It was too quick for me to read, about 1/4-1/2 a second.

    I'd really appreciate some help, as I have some important files that are not backed up yet. I can't transfer the files to a new account, or even make a new account, because I tried taking the password off my account, so now I can't authenticate from the guest account to perform root functions. Thanks.

  • Sharing a session between vBulletin forum and status.net microblogging platform

    - by jaz
    Hello, I need to integrate vBulletin 4.0.3 Publishing Suite with the status.net microblogging platform. The first thing I need to do is make these two share one session, so a user logged in to the vBulletin forums will also be logged in to status.net and vice versa.

    I have installed the different vBulletin components under different subdomains:

    - forums.sample.com - vBulletin forums
    - blogs.sample.com - vBulletin blogs
    - sample.com - vBulletin content management

    All of these point to the same place (.../public_html/index.php), which includes the respective PHP file (content.php for sample.com, blog.php for blogs.sample.com, forum.php for forums.sample.com) depending on $_SERVER['HTTP_HOST']. I have configured vBulletin to use a single cookie domain (.sample.com) for all three domains, so visiting different domains doesn't break the session.

    I also have status.sample.com, which is the subdomain where status.net is installed. The subdomain configuration is different, so the document root is actually a subfolder (.../public_html/status/) in sample.com.

    Now, can you please give me some pointers on how to make all these subdomains share a single session? I'm not sure if it helps, but as I understand it, status.net does no custom session handling by default, though it is possible to turn it on, at which point it will start storing session data in a database table called "session". Any tips will be appreciated. Thank you.
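    The stack in question is PHP, but the mechanism all of the subdomains would rely on can be sketched in Java servlet terms: a session cookie scoped to the parent domain is presented to every subdomain, and each application resolves it against the shared session store. The cookie name and domain below are illustrative:

        import javax.servlet.http.Cookie;
        import javax.servlet.http.HttpServletResponse;

        public class SharedSessionCookie {
            // sessionId would be the key of a row in the shared "session" table.
            public static void issue(HttpServletResponse response, String sessionId) {
                Cookie sso = new Cookie("shared_session", sessionId);
                sso.setDomain(".sample.com"); // leading dot: sent to every *.sample.com host
                sso.setPath("/");             // visible to all paths
                response.addCookie(sso);
            }
        }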

  • How to find if an Oracle APEX session is expired

    - by Mathieu Longtin
    I have created a single-sign-on system for our Oracle APEX applications, roughly based on this tutorial: http://www.oracle.com/technology/oramag/oracle/09-may/o39security.html. The only difference is that my master SSO login is in Perl rather than another APEX app. It sets an SSO cookie, and the app can check whether it's valid with a database procedure.

    I have noticed that when I arrive in the morning, the whole system doesn't work. I reload a page from the APEX app; it sends me to the SSO page because the session has expired; I log on and get redirected back to my original APEX app page. This usually works, except first thing in the morning, when it seems the APEX session has expired. In that case it appears to find the session but then refuses to use it, and sends me back to the login page.

    I've tried my best to trace the problem. The wwv_flow_custom_auth_std.is_session_valid function returns true, so I'm assuming the session is valid. But nothing works until I remove the APEX session cookie; then I can log back in easily. Does anybody know of another call that would tell me whether the session has expired or not? Thanks

  • How does Visual Studio decide the order in which stack variables should be allocated?

    - by Jason
    I'm trying to turn some of the programs in gera's Insecure Programming by example into client/server applications that could be used in capture-the-flag scenarios to teach exploit development. The problem I'm having is that I'm not sure how Visual Studio (I'm using 2005 Professional Edition) decides where to allocate variables on the stack. When I compile and run example 1:

        int main() {
            int cookie;
            char buf[80];

            printf("buf: %08x cookie: %08x\n", &buf, &cookie);
            gets(buf);

            if (cookie == 0x41424344)
                printf("you win!\n");
        }

    I get the following result:

        buf: 0012ff14 cookie: 0012ff64

    buf starts at an address eighty bytes lower than cookie, and any four bytes that are copied into buf after the first eighty will appear in cookie. The problem I'm having is when I place this code in some other function. When I compile and run the following code, I get a different result: buf appears at an address greater than cookie's.

        void ClientSocketHandler(SOCKET cs) {
            int cookie;
            char buf[80];
            char stringToSend[160];
            int numBytesRecved;
            int totalNumBytes;

            sprintf(stringToSend, "buf: %08x cookie: %08x\n", &buf, &cookie);
            send(cs, stringToSend, strlen(stringToSend), NULL);

    The result is:

        buf: 0012fd00 cookie: 0012fcfc

    Now there is no way to set cookie to arbitrary data via overwriting buf. Is there any way to tell Visual Studio to allocate cookie before buf? Is there any way to tell beforehand how the variables will be allocated?

    Thanks, Jason

  • How to stop tcpdump remotely using expect from a new telnet session

    - by The CodeWriter
    I am trying to stop the tcpdump command from running on a remote terminal. If I telnet to the terminal, start tcpdump, and then send a ^C, tcpdump stops with no issues. However, if I telnet to the same terminal, start tcpdump, and then exit the telnet session, when I reconnect to the same telnet session I am unable to stop tcpdump via a ^C. When I do this, instead of stopping tcpdump it seems that it just quits the telnet session, and tcpdump continues to run on the remote terminal. I have provided my script below. Any help is greatly appreciated.

        #!/usr/local/bin/expect -f
        exp_internal 1
        set timeout 30
        spawn /bin/bash
        expect "] "
        send "telnet 192.168.62.133 10006\r"
        expect "Escape character is '^]'."
        send "\r"
        expect "# "
        set now [clock format [clock seconds] -format {%d_%b_%Y_%H%M%S}]
        set command "tcpdump -vv -i trf400 ip proto 89 -s 65535 -w /tmp/test_term420_${now}.pcp "
        send "$command\r"
        expect "tcpdump: listening on"
        # This works correctly. tcpdump quits and I am returned to the expected prompt
        send "\x03"
        expect "# "
        send "$command\r"
        expect "tcpdump: listening on"
        # Exit telnet session
        send -- "\x1d"
        expect "telnet> "
        send -- "q\r"
        expect "] "
        # Reconnect to telnet session
        send "telnet 192.168.62.133 10006\r"
        expect "Escape character is '^]'."
        send "\r"
        # This does not work as intended. The ^c quits the telnet session instead of stopping tcpdump
        send "\x03"
        expect "] "
        send "ls\r"
        expect "] "

  • lost session after redirect_to

    - by PeterWong
    I encountered strange behaviour in my current project, involving sessions. The strange part is that it works in Safari but fails in other browsers (including Chrome, Firefox and Opera). There is a registration form for input of some of the key information (email, password, etc.) which is submitted to an action called "create".

    This is the basic code of the create action:

        @account = Account.new(params[:account])
        if @account.save
          ApplicationController.current_account = @account
          session[:current_account] = ApplicationController.current_account
          session[:account] = ApplicationController.current_account.id
          email = @account.email
          Mailer.deliver_account_confirmation(email)
          flash[:type] = "success"
          flash[:notice] = "Successfully Created Account"
          redirect_to :controller => "accounts", :action => "create_step_2"
        else
          flash[:type] = "error"
          flash[:title] = "Oops, something wasn't right."
          flash[:notice] = "Mistakes are marked below in red. Please fix them and resubmit the form. Thanks."
          render :action => "new"
        end

    I also created a before_filter in the application controller, with the following code:

        ApplicationController.current_account = Account.find_by_id(session[:current_account].id) unless session[:current_account].blank?

    In Safari there is no problem. But in the other browsers session[:current_account] does not exist, which produces the following error message:

        RuntimeError in AccountsController#create_step_2
        Called id for nil, which would mistakenly be 4 -- if you really wanted the id of nil, use object_id

    Could anyone please help me?

  • Java Hibernate session delete of object

    - by user2535201
    I'm really struggling with Hibernate sessions; I never get the result I expect when making a query on a modified session object. I think all my problems are related. The latest one is the following:

        final Session iSession = AbstractDAO.getSessionFactory().openSession();
        try {
            iSession.beginTransaction();
            MyObject iObject = DAOMyObject.getInstance().get(iSession, ObjectId);
            iObject.setQuantity(0); // previously the quantity was different from zero
            DAOMyObject.getInstance().update(iSession, iObject);
            DAOMyObject.getInstance().deleteObjectWithZeroQuantities(iSession);
            iSession.getTransaction().commit();
        } catch (final Exception aException) {
            iSession.getTransaction().rollback();
            logger.error(aException.getMessage(), aException);
            throw aException;
        } finally {
            iSession.close();
        }

    What I'm not getting is why the object is not deleted: since I modified it in the session, the query doing the delete should find it. I had the same problem with creating an object with an incremental id in a session, then creating another one in the same session before the commit, with a select max(id)+1: the session gets me the same id every time.
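    One likely explanation, assuming deleteObjectWithZeroQuantities() issues an HQL/SQL bulk delete: bulk statements run directly against the database and do not see pending, unflushed in-memory changes in the session. A sketch of the same block with an explicit flush, reusing the poster's DAO names (the HQL shown in the comment is an assumption):

        // Sketch only. Assumes the DAO runs something like
        // "delete MyObject o where o.quantity = 0", which bypasses the session cache.
        void zeroQuantityAndPurge(org.hibernate.Session session, java.io.Serializable id) {
            session.beginTransaction();

            MyObject obj = DAOMyObject.getInstance().get(session, id);
            obj.setQuantity(0);
            DAOMyObject.getInstance().update(session, obj);

            // Push the pending UPDATE to the database first; without this, the bulk
            // delete below queries the table while quantity is still non-zero there.
            session.flush();

            DAOMyObject.getInstance().deleteObjectWithZeroQuantities(session);
            session.getTransaction().commit();
        }

    The same reasoning covers the select max(id)+1 case: a query that goes to the database cannot see rows that only exist, unflushed, in the session.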

  • jQuery/Javascript Cookies and variable returning with value [object Object]

    - by user1706661
    I am attempting to set a cookie on a site using jQuery, ONLY if the user came from a specific site. In this case, let's use http://referrersite.com as the site they must come from for the cookie to be created. The cookie value is stored in a variable, and everything up to this point works fine.

    There is a conditional statement checking whether the user came from the referring site, whether the cookie already exists, and whether the cookie doesn't exist and the user did not come from the referring site. If the user came from the referring site, the cookie is created and stored in a variable. If the cookie already exists, it is stored in a variable. If the cookie does not exist and the user did not come from the referring site, I assign the variable a static string of characters -- this is where the issue lies. When the variable is alerted from the non-referred site with no existing cookie, it returns [object Object], not the static string of characters.

    The code I am using is below:

        $(document).ready(function() {
            var referrer = document.referrer;
            if (referrer == "http://referrersite.com") {
                $.cookie("code", "123456", { expires: 90, path: '/' });
                cookieContainer = $.cookie("code");
                alert(cookieContainer);
            } else if ($.cookie("code")) {
                cookieContainer = $.cookie("code");
                alert(cookieContainer);
            } else if ($.cookie("code") == null && referrer != "http://referrersite.com") {
                cookieContainer = "67890";
                alert(cookieContainer);
            }
        });

    Please let me know if there is something I am missing, as the code looks to me like it should work. Thanks!

  • Active Directory - Lightweight Directory Services and Domain Password Policy

    - by Craig Beuker
    Greetings all. We have an Active Directory domain which enforces a strict password policy. Hooray! Now, for the project we are working on, we are going to be storing users of our website in Microsoft's AD-LDS service, as well as using that for authentication of our web users.

    By default, it is my understanding that AD-LDS inherits its password policy from the domain of the machine it's installed on. Is there any way to break that link so that we can define a lighter password policy (or none, if we so choose) for users in AD-LDS without affecting our domain? Note: AD-LDS is going to be hosted on a machine which is part of the domain.

    Thanks in advance.

  • Cannot receive email outside domain with Microsoft Exchange

    - by Adi
    This morning we couldn't receive email from outside our company's domain (domain.com.au), but we can send email to the outside (e.g. Hotmail, Gmail, Yahoo). When I tried to send email to my work email address using my Gmail account, I received this message:

        Technical details of permanent failure:
        Google tried to deliver your message, but it was rejected by the recipient domain. We recommend contacting the other email provider for further information about the cause of this error. The error that the other server returned was: 550 550 Unable to relay for [email protected] (state 14).

    I tried using telnet to send email, and it works. But I still can't figure out why we can't receive email from outside. I'm not sure if I've provided enough info; I'll try to provide as much info as needed to help solve the problem. Thanks

  • Giving Select Windows Domain Users Symbolic Link Privilege

    - by fp0n
    I would like to set up select users on our domain to have the ability to create symbolic links on local NTFS drives and network shares without needing to run as Administrator, as part of an application which will call the CreateSymbolicLink() API directly. The default configuration for our users is to be Administrator of their computer, and I think I am fighting UAC to make the privileges work the way that I want because of that.

    I found this link on MSDN: http://social.msdn.microsoft.com/Forums/en-SG/windowssdk/thread/fa504848-a5ea-4e84-99b7-0eb4e469cbef which describes the interaction between the SeCreateSymbolicLinkPrivilege, UAC and a domain, but it really does not have a solution.

    Here are the four options I've come up with:

    1) Create a new group, give the SeCreateSymbolicLinkPrivilege to the group, and assign users to the group
    2) Give each individual user (2 now, more later) the privilege
    3) Give the privilege to the default Users group, which opens it up to all users
    4) Change the config so users are not admins by default (probably would work, but not likely)

    Based on my testing, only 3 works for me, and that is the least desirable option. I've only got a local server to test with, not a domain. I need to recommend to the admin how to set this up, and also have something we can easily explain to other users of our application who are on their own domain or not on a domain at all. The other option seems to be to create a Service that runs under a SYSTEM account and creates the links for the application, but I'd rather not go that route. Thanks.
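    For testing whether a given account actually ends up with an effective SeCreateSymbolicLinkPrivilege under each of the options above, a small probe can help. This sketch uses Java's Files.createSymbolicLink, which on Windows maps to the same CreateSymbolicLink() API; the file names are illustrative:

        // Probe: attempt to create a symlink and report whether the current
        // token holds an effective symlink-creation privilege.
        import java.io.IOException;
        import java.nio.file.*;

        public class SymlinkProbe {
            public static void main(String[] args) {
                Path link = Paths.get("probe-link");
                Path target = Paths.get("probe-target.txt");
                try {
                    Files.deleteIfExists(link);
                    Files.createSymbolicLink(link, target);
                    System.out.println("OK: this account can create symbolic links.");
                    Files.delete(link);
                } catch (IOException | UnsupportedOperationException e) {
                    // Typical failure without the privilege:
                    // "A required privilege is not held by the client."
                    System.out.println("FAILED: " + e);
                }
            }
        }

    Note that with UAC enabled, the result can differ between an elevated and a non-elevated run of the same account, which is the interaction the MSDN thread above describes.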

  • How to configure Amazon Route53 to serve the domain without www

    - by romuloigor
    Edit: Amazon now supports this. http://aws.typepad.com/aws/2012/12/root-domain-website-hosting-for-amazon-s3.html

    I have my domain configured in Route53 at Amazon AWS. Running the ping command on my domain without www:

        $ ping gabster.com.br
        ping: cannot resolve gabster.com.br: Unknown host

    Running the ping command on my domain with www:

        $ ping www.gabster.com.br
        PING s3-website-sa-east-1.amazonaws.com (177.72.245.6): 56 data bytes
        64 bytes from 177.72.245.6: icmp_seq=0 ttl=244 time=25.027 ms
        64 bytes from 177.72.245.6: icmp_seq=1 ttl=244 time=25.238 ms
        64 bytes from 177.72.245.6: icmp_seq=2 ttl=244 time=25.024 ms

    In Route 53, creating a record set with Name: [ ].gabster.com.br and the CNAME value set to www.gabster.com.br displays the error:

        RRSet of type CNAME with DNS name mydomin.com is not permitted at apex in zone mydomin.com
