Search Results

Search found 721 results on 29 pages for 'uuid'.

  • Error: java.lang.String cannot be cast to coldfusion.cfc.CFCBeanProxy. Does anyone know about this?

    - by faheem
    Hi, does anyone know about this error I get when I try to insert a foreign key value into my entry entity using CF9 ORM (Hibernate)?

      java.lang.ClassCastException: java.lang.String cannot be cast to coldfusion.cfc.CFCBeanProxy
      Root cause: org.hibernate.HibernateException: java.lang.ClassCastException: java.lang.String cannot be cast to coldfusion.cfc.CFCBeanProxy

    Below is the code for my entry entity and then for my user object. Is there anything wrong with this?

    entry.cfc:

      /**
       * Entries Object
       */
      component output="false" persistent="true"{
          property name="entry_id" fieldType="id" generator="uuid";
          property name="entryBody" ormType="text";
          property name="title" notnull="true" type="string";
          property name="time" fieldtype="timestamp";
          property name="isCompleted" ormType="boolean" dbdefault="0" default="false";
          property name="userID" fieldtype="many-to-one" fkcolumn="userID" cfc="user";

          Entry function init() output=false{
              return this;
          }
      }

    user.cfc:

      /**
       * Users Object
       */
      component output="false" persistent="true"{
          property name="userID" fieldType="id" generator="uuid";
          property name="firstName" notnull="true" type="string";
          property name="lastName" notnull="true" type="string";
          property name="password" notnull="true" type="string";
          property name="userType" notnull="true" type="string";
          //property name="entry" fieldtype="one-to-many" type="array" fkcolumn="userID" cfc="entry";

          User function init() output=false{
              return this;
          }
      }

  • WTK emulator bluetooth connection problem

    - by Gokhan B.
    Hi! I'm developing a J2ME program with Eclipse / WTK 2.5.2 and having a problem connecting two emulators via Bluetooth. There is one server and one client running on two different emulators. The problem is that the client program cannot discover any Bluetooth devices. Here are the server and client codes:

      public Server() {
          try {
              LocalDevice local = LocalDevice.getLocalDevice();
              local.setDiscoverable(DiscoveryAgent.GIAC);
              server = (StreamConnectionNotifier) Connector.open("btspp://localhost:" + UUID_STRING + ";name=" + SERVICE_NAME);
              Util.Log("EchoServer() Server connector open!");
          } catch (Exception e) {}
      }

    After calling Connector.open, I get the following warning in the console, which I believe is related:

      Warning: Unregistered device: unspecified

    And here is the client code that searches for devices:

      public SearchForDevices(String uuid, String nm) {
          UUIDStr = uuid;
          srchServiceName = nm;
          try {
              LocalDevice local = LocalDevice.getLocalDevice();
              agent = local.getDiscoveryAgent();
              deviceList = new Vector();
              agent.startInquiry(DiscoveryAgent.GIAC, this); // non-blocking
          } catch (Exception e) {}
      }

    The system never calls deviceDiscovered, but calls inquiryCompleted() with the INQUIRY_COMPLETED parameter, so I suppose the client program itself runs fine. Bluetooth is enabled in the emulator settings. Any ideas?

  • SvnDumpFilter 2,3: Error parsing header. How to fix?

    - by flashnik
    I use SVN from CollabNet, version 1.6.9. I tried to split my project into two parts with SvnDumpFilter and encountered an error, because some folders (including the one I want to separate) had been moved. Then I googled that SvnDumpFilter 2 and 3 can solve this problem. I tried to use them, but in both cases got the error: Error parsing header. Here is the beginning of the source dump:

      SVN-fs-dump-format-version: 2

      UUID: REP_GUID

      Revision-number: 0
      Prop-content-length: 56
      Content-length: 56

      K 8
      svn:date
      V 27
      2008-10-28T07:01:45.445155Z
      PROPS-END

      Revision-number: 1
      Prop-content-length: 151
      Content-length: 151

      K 7
      svn:log
      V 48
      ?±???·???µ??-?‡?°???‚??, ?????‚?????°?? ?±?‹?»?°
      K 10
      svn:author
      V 8
      flashnik
      K 8
      svn:date
      V 27
      2008-10-29T20:18:56.633888Z
      PROPS-END

      Node-path: Foo
      Node-kind: dir
      Node-action: add
      Prop-content-length: 10
      Content-length: 10

      PROPS-END

      Node-path: Foo/Bar
      Node-kind: dir
      Node-action: add
      Prop-content-length: 10
      Content-length: 10

      PROPS-END

      Node-path: Foo/Bar/example.doc
      Node-kind: file
      Node-action: add
      Prop-content-length: 59
      Text-content-length: 181248
      Text-content-md5: f14c77a031ab2de001ac5239427ceded
      Text-content-sha1: 95470e8d29bf76b00485c4fa33f4029f5c2386cb
      Content-length: 181307

      K 13
      svn:mime-type
      V 24
      application/octet-stream
      ...some binary content, and so on

    SvnDumpFilter3 produces the following part before dying:

      SVN-fs-dump-format-version: 2

      UUID: REP_GUID

      Revision-number: 0
      Prop-content-length: 56
      Content-length: 56

      K 8
      svn:date
      V 27
      2008-10-28T07:01:45.445155Z
      PROPS-END

      Revision-number: 1
      Prop-content-length: 151
      Content-length: 151

      K 7
      svn:log
      V 48
      ?±???·???µ??-?‡?°???‚??, ?????‚?????°?? ?±?‹?»?°
      K 10
      svn:author
      V 8
      flashnik
      K 8
      svn:date
      V 27
      2008-10-29T20:18:56.633888Z
      PROPS-END

    What's wrong? How can I fix it? Does it work with my Subversion version?
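
    For what it's worth, "Error parsing header" usually means a record header or its declared Content-length no longer lines up with the bytes that follow it. A rough diagnostic sketch (my own, not part of any svndumpfilter tool; assumes dump format 2) that walks the records and reports the first one whose body comes up short:

      import sys

      def walk_records(path):
          """Walk an SVN dump (format 2) and report the first record whose
          declared Content-length does not match the bytes that follow."""
          with open(path, 'rb') as f:
              while True:
                  line = f.readline()
                  if not line:
                      return                      # clean end of dump
                  if not line.strip():
                      continue                    # blank separator between records
                  headers = {}
                  while line.strip():             # RFC822-style header block
                      key, _, value = line.decode('latin-1').partition(':')
                      headers[key.strip()] = value.strip()
                      line = f.readline()
                  length = int(headers.get('Content-length', 0))
                  body = f.read(length)
                  if len(body) != length:
                      print('short record at', headers, file=sys.stderr)
                      return
                  name = headers.get('Node-path') or 'r' + headers.get('Revision-number', '?')
                  print(name, 'Content-length', length)

      if __name__ == '__main__':
          walk_records(sys.argv[1])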

  • Linking problems using libcurl with Visual C++ 2005: "unresolved external symbol __imp__curl_easy_setopt"

    - by user88595
    Hi, I am planning to use libcurl in my project. I downloaded the library source, built it, and integrated it into a small POC application. I am able to build and run the POC without any issues with the generated libcurl.dll and libcurl_imp.lib files. Now when I integrate the same library into my main project, I get linker errors:

      6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_setopt
      6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_perform
      6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_cleanup
      6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_global_init
      6>foo.obj : error LNK2001: unresolved external symbol __imp__curl_easy_init

    I have researched and tried all manner of workarounds, like adding the CURL_STATICLIB definition, additional libraries, changing to /MT, even copying the libs to the release directory, but nothing seems to work. As far as I can see, the only difference between approach #1 and #2 in my steps is that #1 is a console application using libcurl.dll, while in my main project it is another DLL which is trying to link to libcurl.dll. Would that necessitate any change in approach? Can I use the same generated multi-threaded DLL /MD file for both (tried /MT also with no success)? Any other ideas? Following are the linker options.

      ------------------------------------------------- Working -------------------------------------------------
      /OUT:"C:\SampleFTP\Release\SampleFTP.exe" /INCREMENTAL:NO /NOLOGO
      /LIBPATH:"C:\SampleFTP\SampleFTP\Release" /MANIFEST
      /MANIFESTFILE:"Release\SampleFTP.exe.intermediate.manifest" /DEBUG
      /PDB:"c:\SampleFTP\release\SampleFTP.pdb" /SUBSYSTEM:CONSOLE /OPT:REF /OPT:ICF /LTCG
      /MACHINE:X86 /ERRORREPORT:PROMPT
      libcurl_imp.lib kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib
      shell32.lib ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
      ------------------------------------------------- Working -------------------------------------------------

      ----------------------------------------------- NotWorking ------------------------------------------------
      /OUT:".......\nt\Win32\Release/foo__tests.dll" /INCREMENTAL:NO /NOLOGO
      /LIBPATH:"C:\FullLibPath\libcurl_libs" /LIBPATH:"......\nt\Win32\Release" /DLL /MANIFEST
      /MANIFESTFILE:".\foo_tests\Win32\Release\foo_tests.dll.intermediate.manifest" /DEBUG
      /PDB:".......\nt\Win32\Release/foo_tests.pdb" /OPT:REF /OPT:ICF /LTCG
      /IMPLIB:".......\nt\Win32\Release/foo_tests.lib" /MACHINE:X86 /ERRORREPORT:PROMPT
      odbc32.lib odbccp32.lib util_process.lib wsock32.lib Version.lib libcurl_imp.lib
      kernel32.lib user32.lib gdi32.lib winspool.lib comdlg32.lib advapi32.lib shell32.lib
      ole32.lib oleaut32.lib uuid.lib odbc32.lib odbccp32.lib
      "......\nt\win32\release\otherlib1.lib" "......\nt\win32\release\otherlib2.lib"
      ----------------------------------------------- NotWorking ------------------------------------------------

  • Umlauts from a JSP page are misinterpreted

    - by Karin
    I'm getting input from a JSP page that can contain umlauts (e.g. Ä, Ö, Ü, ä, ö, ü, ß). Whenever an umlaut is entered in the input field, an incorrect value gets passed on. E.g. if an "ä" (UTF-8: U+00E4) is entered in the input field, the string extracted from the argument is "ä" (UTF-8: U+00C3 and U+00A4). It seems to me as if the UTF-8 hex encoding (which is C3 A4 for an "ä") is being used for the conversion. How can I retrieve the correct value? Here are snippets from the current implementation. The JSP page passes the input value "pk" on to the processing logic:

      <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
      ...
      <input type="text" name="<%=pk.toString()%>" value="<%=value%>" size="70"/>
      <button type="submit" title="Edit" value='Save' onclick="action.value='doSave';pk.value='<%=pk.toString()%>'"><img src="icons/run.png"/>Save</button>

    The value gets retrieved from args and converted to a string:

      UUID pk = UUID.fromString(args.get("pk")); //$NON-NLS-1$
      String value = args.get(pk.toString());

    Note: umlauts that are saved in the database get displayed correctly on the page.
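
    For what it's worth, this failure mode is easy to reproduce and reverse: the UTF-8 byte pair C3 A4 for "ä" is being decoded as two ISO-8859-1 characters. A small Python sketch of the round trip (illustrative only; on the Java side the usual fix is to call request.setCharacterEncoding("UTF-8") before any parameter is read, so the container decodes the form data as UTF-8):

      # -*- coding: utf-8 -*-

      original = "ä"                               # U+00E4
      utf8_bytes = original.encode("utf-8")        # b'\xc3\xa4'

      # what the server effectively does: decode UTF-8 bytes as Latin-1
      garbled = utf8_bytes.decode("latin-1")       # 'Ã¤' (U+00C3 U+00A4)
      print(garbled)

      # the damage is reversible as long as no characters were dropped:
      repaired = garbled.encode("latin-1").decode("utf-8")
      assert repaired == original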

  • How should I configure grub for booting linux kernel from a USB hard drive?

    - by skolima
    I have a laptop hard drive in an external enclosure which I use as a large pendrive. For an added twist, I have installed Linux on it, so I can boot any machine with my distribution of choice (e.g. for data recovery, repairing a b0rked system, or just using a borrowed laptop without destroying the preinstalled Windows). The problem is that, depending on the hardware configuration, the USB hard drive may be visible under different paths. For the grub configuration I just use (hd0,0), as it is relative to the device grub was launched from. I have UUID entries in /etc/fstab. I also specify rootwait in the kernel parameters so that the kernel waits for the USB subsystem to settle down before trying to mount the device. What should I pass to the kernel as root=?

    Currently I boot from the pendrive once, check the debug messages to see which /dev/sdX device the kernel has assigned to the USB drive, then reboot and edit the grub configuration. I can't change anything on the PC besides enabling "Boot from USB hard drive" in the BIOS and setting it to a higher priority than the internal hard drives. There are various initrd-generating scripts which include support for UUIDs in the root device path; unfortunately the Gentoo native one (genkernel) does not support rootwait, and I had no luck trying to use others.

    The boot process goes like this (it is quite similar in Windows):

    1. The BIOS chooses the boot device and loads whatever is in its MBR (which happens to be grub stage 1).
    2. Grub loads its configuration and stage 2 files from the device it has set as root, using (hd0) for the device it was loaded from by the BIOS.
    3. Grub loads and starts a kernel (still the same numbering, so I can use (hd0,0) again).
    4. The kernel initializes all built-in devices (rootwait does its magic now).
    5. The kernel mounts the partition it was passed as root (this is a kernel parameter, not a grub parameter).
    6. init starts the userland boot process, including mounting things from /etc/fstab.

    Step 5 is the one giving me problems.
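
    For reference: with an initramfs that understands it, root=UUID=<filesystem-uuid> sidesteps the unstable /dev/sdX naming entirely, and udev exposes the same mapping at /dev/disk/by-uuid/. A small illustrative Python sketch (the UUID below is a placeholder) showing how that symlink resolves to whatever node name the kernel picked on this particular boot:

      import os

      FS_UUID = "0a1b2c3d-0000-0000-0000-placeholder0"   # hypothetical filesystem UUID

      link = os.path.join("/dev/disk/by-uuid", FS_UUID)
      # udev maintains these symlinks; realpath() follows the link to the
      # kernel-assigned node, e.g. /dev/sdb1 on one machine, /dev/sdc1 on another
      print(os.path.realpath(link))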

  • JasperReports: reports access images via an authenticated URL

    - by user363115
    Hi all, I am hoping that this is a simple issue with a simple solution and that I have missed something obvious. Let me explain the problem. We have an application that generates PDF reports (using Jasper). These reports contain data from our database as well as imagery (photographs). These photographs are stored in S3, and we use signed URLs to access them. We link these photographs into our Jasper reports using these S3 URLs. Because the S3 URLs are signed and time-limited (by design), the process is as follows:

    1. The user requests a report to be generated.
    2. The report is filled and goes to our database (at which time the UUIDs of any required images are retrieved).
    3. For each UUID a signed S3 URL must be generated.
    4. To do this, the URL behind each report image is a call to an authenticated URL in our app (/get_img?uuid=foo).
    5. The controller behind this URL generates a signed S3 URL and returns it.
    6. The report loads the image.

    The problem is with step 4: the call to the authenticated URL fails because Jasper does not pass any authentication information with the request. Is there a solution here? Thanks all for your time. Ben
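
    One way to remove step 4 from the loop entirely (a sketch, not necessarily how this app is wired) is to generate the signed S3 URLs while the report is being filled and pass them in as report parameters, so Jasper fetches the images straight from S3 and never hits the authenticated endpoint. With the boto3 SDK that could look roughly like this (bucket and parameter names are hypothetical):

      import boto3

      s3 = boto3.client("s3")

      def signed_image_url(image_uuid, expires=300):
          # presigned GET that Jasper can fetch without any app credentials;
          # the URL itself carries the time-limited authorization
          return s3.generate_presigned_url(
              "get_object",
              Params={"Bucket": "report-images", "Key": image_uuid},
              ExpiresIn=expires,
          )

      # pass {"photo_url": signed_image_url(image_uuid)} into the report
      # parameters instead of the http://app/get_img?uuid=... indirection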

  • CherryPy always returning HTTP 200 [closed]

    - by DarkArctic
    I'm having a bit of a problem when browsing to a non-existent resource: I get a response code of 200 instead of 404. I'm using the MethodDispatcher, and I have a class that overloads the __getattr__ method to instantiate a resource if a child exists, or to raise AttributeError if one doesn't. My class is always raising the AttributeError correctly, but the data I actually get back is always from the last good resource. Here's a simplified (except for __getattr__) version of my class:

      class BaseResource(object):
          exposed = True

          def __init__(self, name):
              self.children = []  # Pretend this has child resources

          def __getattr__(self, name):
              if name in self._children:
                  uuid, application, obj_type, server = self._children[name]
                  try:
                      resource = getattr(app[application], obj_type)
                  except AttributeError as e:
                      raise cherrypy.HTTPError(500, e)
                  return resource(uuid)
              else:
                  raise AttributeError('Child with name \'{}\' could not be found.'.format(name))

          def GET(self):
              cherrypy.log.error('*** {} not found, raising AttributeError'.format(name))
              return 'GET request for {}'.format(self._name)

    So I get the following when I browse to these resources:

      http://localhost:8000/users - This resource exists, so it is returned correctly.
      http://localhost:8000/users/fake - This returns the "users" resource, giving an HTTP 200.
      http://localhost:8000/users/fake/reallyfake - This returns the "users" resource again.

    So my question is: where can I start looking to find out why my code isn't returning a 404 for a non-existent resource? I'm sure I've done something wrong, but I'm not sure what.

    Update: Whatever I did wrong I've undone, and I'm now getting a 404 returned correctly. I'm sorry I can't give any detail on what the issue was, but I'm honestly not sure what I did.
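
    For comparison, a minimal MethodDispatcher tree that 404s on unknown children might look like the sketch below (illustrative only; it relies on the dispatcher treating a plain AttributeError raised during traversal as a missing resource, rather than falling back to the parent handler):

      import cherrypy

      class Users(object):
          exposed = True
          _children = {}          # child name -> resource factory (hypothetical)

          def __getattr__(self, name):
              factory = self._children.get(name)
              if factory is None:
                  # a plain AttributeError is what attribute lookup is expected
                  # to raise for a missing child; CherryPy then answers 404
                  raise AttributeError(name)
              return factory()

          def GET(self):
              return "user collection"

      if __name__ == "__main__":
          conf = {"/": {"request.dispatch": cherrypy.dispatch.MethodDispatcher()}}
          cherrypy.quickstart(Users(), "/users", conf)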

  • Immutability and shared references - how to reconcile?

    - by davetron5000
    Consider this simplified application domain, a criminal investigative database:

    - A Person is anyone involved in an investigation.
    - A Report is a bit of info that is part of an investigation.
    - A Report references a primary Person (the subject of an investigation).
    - A Report has accomplices who are secondarily related (and could certainly be primary in other investigations or reports).
    - These classes have ids that are used to store them in a database, since their info can change over time (e.g. we might find new aliases for a person, or add persons of interest to a report).

    If these are stored in some sort of database and I wish to use immutable objects, there seems to be an issue regarding state and referencing. Suppose I change some meta-data about a Person. Since my Person objects are immutable, I might have some code like:

      class Person(
          val id:UUID,
          val aliases:List[String],
          val reports:List[Report]) {
        def addAlias(name:String) = new Person(id, name :: aliases, reports)
      }

    So my Person with a new alias becomes a new object, also immutable. If a Report refers to that person, but the alias was changed elsewhere in the system, my Report now refers to the "old" person, i.e. the person without the new alias. Similarly, I might have:

      class Report(val id:UUID, val content:String) {
        /** Adding more info to our report */
        def updateContent(newContent:String) = new Report(id, newContent)
      }

    Since these objects don't know who refers to them, it's not clear to me how to let all the "referrers" know that there is a new object available representing the most recent state. This could be done by having all objects "refresh" from a central data store, with every operation that creates a new, updated object storing it back to that central data store, but this feels like a cheesy reimplementation of the underlying language's referencing; i.e. it would be clearer to just make these "secondary storable objects" mutable, so that if I add an alias to a Person, all referrers see the new value without doing anything. How is this dealt with when we want to avoid mutability, or is this a case where immutability is not helpful?
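
    One common resolution, sketched below in Python for brevity (the question itself is language-agnostic), is to lean into the "central data store" idea rather than fight it: referrers hold only the stable id, and a single mutable map from id to the latest immutable snapshot is the one place where state changes. Identities are mutable references; values never are:

      import uuid
      from dataclasses import dataclass, replace

      @dataclass(frozen=True)
      class Person:
          id: uuid.UUID
          aliases: tuple
          def add_alias(self, name):
              return replace(self, aliases=(name,) + self.aliases)

      @dataclass(frozen=True)
      class Report:
          id: uuid.UUID
          content: str
          subject_id: uuid.UUID      # reference by id, not by object

      class Store:
          """The single mutable cell: id -> latest immutable snapshot."""
          def __init__(self):
              self._latest = {}
          def put(self, entity):
              self._latest[entity.id] = entity
          def get(self, entity_id):
              return self._latest[entity_id]

      store = Store()
      p = Person(uuid.uuid4(), ())
      store.put(p)
      r = Report(uuid.uuid4(), "initial findings", subject_id=p.id)
      store.put(store.get(p.id).add_alias("The Fox"))
      assert "The Fox" in store.get(r.subject_id).aliases   # referrer sees new state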

  • How to get a checkout-able revision info from subversion?

    - by zhongshu
    I want to check an svn URL and get the latest revision, then check it out. I don't want to use HEAD because I will compare the latest revision to others, so I use "svn info" to get the "Last Changed Rev" for the URL, like this:

      D:\Project>svn info svn://.../branches/.../path
      Path: ...
      URL: svn://.../branches/.../path
      Repository Root: svn://yt-file-srv/
      Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
      Revision: 2400
      Node Kind: directory
      Last Changed Author: xxx
      Last Changed Rev: 2396
      Last Changed Date: 2010-03-12 09:31:52 +0800

    But I found that revision 2396 is not checkout-able, because this path is in a branch copied from trunk, and 2396 is the revision modified in the trunk. So when I use svn checkout -r 2396, I get a working copy for the path in the trunk, and then I cannot check in to the branch.

      D:\Project>svn checkout svn://.../branches/.../path -r 2396 workcopy
      .....
      .....
      D:\Project>svn info workcopy
      Path: workcopy
      URL: svn://.../trunk/.../path
      Repository Root: svn://yt-file-srv/
      Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
      Revision: 2396
      Node Kind: directory
      Schedule: normal
      Last Changed Author: xxx
      Last Changed Rev: 2396
      Last Changed Date: 2010-03-12 09:31:52 +0800

    So, my question is how to get a checkout-able revision for the branch path; for this example, I want to get 2397 (because 2397 is the revision where the copy occurred). I know "svn log" can get the info, but "svn log" output may be very long, and parsing it is more difficult than "svn info". I just want to know which revision is the latest checkout-able revision for the path.
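
    For reference, a sketch of the "svn log" approach that keeps the output trivially small: --limit 1 caps it at a single entry, and --stop-on-copy refuses to follow history back across the branch point, so when nothing has changed on the branch since the copy, the one entry reported should be the copy revision itself (2397 in the example above). This assumes the svn command-line client is on PATH:

      import subprocess
      import xml.etree.ElementTree as ET

      def latest_branch_revision(url):
          # --stop-on-copy: don't walk back into trunk across the branch copy
          # --limit 1: newest qualifying entry only, so output stays tiny
          out = subprocess.check_output(
              ["svn", "log", "--xml", "--limit", "1", "--stop-on-copy", url])
          return int(ET.fromstring(out).find("logentry").get("revision"))

      print(latest_branch_revision("svn://yt-file-srv/branches/some/path"))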

  • How to speed up a slow UPDATE query

    - by Mike Christensen
    I have the following UPDATE query:

      UPDATE Indexer.Pages SET LastError=NULL WHERE LastError IS NOT NULL;

    Right now, this query takes about 93 minutes to complete, and I'd like to find ways to make it a bit faster. The Indexer.Pages table has around 506,000 rows, and about 490,000 of them contain a value for LastError, so I doubt I can take advantage of any indexes here. The table (when uncompressed) has about 46 gigs of data in it; however, the majority of that data is in a text field called html. I believe simply loading and unloading that many pages is causing the slowdown. One idea would be to make a new table with just the Id and the html field, and keep Indexer.Pages as small as possible. However, testing this theory would be a decent amount of work, since I don't actually have the disk space to create a copy of the table. I'd have to copy it over to another machine, drop the table, then copy the data back, which would probably take all evening. Ideas? I'm using Postgres 9.0.0.

    UPDATE: Here's the schema:

      CREATE TABLE indexer.pages
      (
        id uuid NOT NULL,
        url character varying(1024) NOT NULL,
        firstcrawled timestamp with time zone NOT NULL,
        lastcrawled timestamp with time zone NOT NULL,
        recipeid uuid,
        html text NOT NULL,
        lasterror character varying(1024),
        missingings smallint,
        CONSTRAINT pages_pkey PRIMARY KEY (id),
        CONSTRAINT indexer_pages_uniqueurl UNIQUE (url)
      );

    I also have two indexes:

      CREATE INDEX idx_indexer_pages_missingings
        ON indexer.pages USING btree (missingings)
        WHERE missingings > 0;

    and

      CREATE INDEX idx_indexer_pages_null
        ON indexer.pages USING btree (recipeid)
        WHERE NULL::boolean;

    There are no triggers on this table, and there is one other table that has a FK constraint on Pages.PageId.
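
    One common mitigation (a sketch, not a tested fix for this particular schema) is to split the work into bounded batches, so no single transaction rewrites 490,000 row versions at once; each batch commits quickly, and vacuum can reclaim dead tuples between rounds. With psycopg2 against the table above:

      import psycopg2

      conn = psycopg2.connect("dbname=indexer")   # hypothetical DSN
      conn.autocommit = True                      # each batch is its own transaction

      with conn.cursor() as cur:
          while True:
              cur.execute("""
                  UPDATE indexer.pages
                     SET lasterror = NULL
                   WHERE id IN (SELECT id
                                  FROM indexer.pages
                                 WHERE lasterror IS NOT NULL
                                 LIMIT 5000)
              """)
              if cur.rowcount == 0:
                  break                           # nothing left to clear
              print("updated", cur.rowcount, "rows")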

  • SQL Server query

    - by carrot_programmer_3
    Hi, I have a SQL Server DB containing a registrations table that I need to plot on a graph over time. The issue is that I need to break the numbers down by where the user registered from (e.g. website, WAP site, or a mobile application). The resulting output data should look like this:

      [date] [num_reg_website] [num_reg_wap_site] [num_reg_mobileapp]
      1 FEB 2010,24,35,64
      2 FEB 2010,23,85,48
      3 FEB 2010,29,37,79
      etc...

    The source table is as follows:

      UUID (int), signupdate (datetime), requestsource (varchar(50))

    Some sample data in this table looks like this:

      1001,2010-02-2:00:12:12,'website'
      1002,2010-02-2:00:10:17,'app'
      1003,2010-02-3:00:14:19,'website'
      1004,2010-02-4:00:16:18,'wap'
      1005,2010-02-4:00:18:16,'website'

    Running the following query returns one data column, 'total registrations', for the website registrations, but I'm not sure how to do this for multiple columns, unfortunately:

      SELECT CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) AS [signupdate],
             COUNT(UUID) AS 'total registrations'
      FROM [UserRegistrationRequests]
      WHERE requestsource = 'website'
      GROUP BY CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
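
    The usual answer here is conditional aggregation: drop the WHERE filter and let a CASE expression inside each aggregate count one request source per column. A sketch building on the query above, shown as a query string for use from any client library (the 'website'/'wap'/'app' values are taken from the sample data):

      # T-SQL pivot via conditional aggregation; table/column names from the post
      PIVOT_QUERY = """
      SELECT CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME) AS [date],
             SUM(CASE WHEN requestsource = 'website' THEN 1 ELSE 0 END) AS num_reg_website,
             SUM(CASE WHEN requestsource = 'wap'     THEN 1 ELSE 0 END) AS num_reg_wap_site,
             SUM(CASE WHEN requestsource = 'app'     THEN 1 ELSE 0 END) AS num_reg_mobileapp
      FROM [UserRegistrationRequests]
      GROUP BY CAST(FLOOR(CAST([signupdate] AS FLOAT)) AS DATETIME)
      ORDER BY 1
      """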

  • How to get the earliest checkout-able revision info from subversion?

    - by zhongshu
    I want to check an svn URL and get the earliest revision, then check it out. I don't want to use HEAD because I will compare the earliest revision to others, so I use "svn info" to get the "Last Changed Rev" for the URL, like this:

      D:\Project>svn info svn://.../branches/.../path
      Path: ...
      URL: svn://.../branches/.../path
      Repository Root: svn://yt-file-srv/
      Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
      Revision: 2400
      Node Kind: directory
      Last Changed Author: xxx
      Last Changed Rev: 2396
      Last Changed Date: 2010-03-12 09:31:52 +0800

    But I found that revision 2396 is not checkout-able, because this path is in a branch copied from trunk, and 2396 is the revision modified in the trunk. So when I use svn checkout -r 2396, I get a working copy for the path in the trunk, and then I cannot check in to the branch.

      D:\Project>svn checkout svn://.../branches/.../path -r 2396 workcopy
      .....
      .....
      D:\Project>svn info workcopy
      Path: workcopy
      URL: svn://.../trunk/.../path
      Repository Root: svn://yt-file-srv/
      Repository UUID: 9ed5ffd7-7585-a14e-96b2-4aab7121bb21
      Revision: 2396
      Node Kind: directory
      Schedule: normal
      Last Changed Author: xxx
      Last Changed Rev: 2396
      Last Changed Date: 2010-03-12 09:31:52 +0800

    So, my question is how to get a checkout-able revision for the branch path; for this example, I want to get 2397 (because 2397 is the revision where the copy occurred). I know "svn log" can get the info, but "svn log" output may be very long, and parsing it is more difficult than "svn info". I just want to know which revision is the earliest checkout-able revision for the path.

  • Update php 5.2.0 to 5.2.4 with aptitude

    - by Kiva
    Hi guys, I would like to update the PHP 5 on my server. At the moment I have PHP 5.2.0, and I want to update it to PHP 5.2.4 (not PHP 5.3). I tried to do this:

      aptitude update
      aptitude upgrade

    63 packages were updated, but not PHP, which is still at 5.2.0. How can I update my PHP, please? Here is the output of the commands asked for by David in another post:

      aptitude search php5
      p   libapache-mod-php5     - server-side, HTML-embedded scripting langu
      i A libapache2-mod-php5    - server-side, HTML-embedded scripting langu
      i   php5                   - server-side, HTML-embedded scripting langu
      p   php5-apache2-mod-bt    - PHP bindings for mod_bt
      p   php5-auth-pam          - A PHP5 extension for PAM authentication
      i   php5-cgi               - server-side, HTML-embedded scripting langu
      p   php5-clamavlib         - PHP ClamAV Lib - ClamAV Interface for PHP5
      p   php5-cli               - command-line interpreter for the php5 scri
      i A php5-common            - Common files for packages built from the p
      i   php5-curl              - CURL module for php5
      p   php5-dev               - Files for PHP5 module development
      i A php5-gd                - GD module for php5
      p   php5-idn               - PHP api for the IDNA library
      p   php5-imagick           - ImageMagick module for php5
      p   php5-imap              - IMAP module for php5
      p   php5-interbase         - interbase/firebird module for php5
      p   php5-json              - JSON serialiser for PHP5
      p   php5-ldap              - LDAP module for php5
      p   php5-mapscript         - module for php5-cgi to use mapserver
      p   php5-maxdb             - PHP extension to access MaxDB databases fo
      i A php5-mcrypt            - MCrypt module for php5
      p   php5-memcache          - memcache extension module for PHP5
      p   php5-mhash             - MHASH module for php5
      p   php5-ming              - Ming module for php5
      i A php5-mysql             - MySQL module for php5
      p   php5-odbc              - ODBC module for php5
      p   php5-pgsql             - PostgreSQL module for php5
      p   php5-ps                - ps module for PHP 5
      p   php5-pspell            - pspell module for php5
      p   php5-radius            - PECL radius module for PHP 5
      p   php5-recode            - recode module for php5
      p   php5-snmp              - SNMP module for php5
      p   php5-sqlite            - SQLite module for php5
      p   php5-sqlite3           - SQLite3 module for php5
      p   php5-sqlrelay          - SQL Relay PHP API
      p   php5-suhosin           - advanced protection module for php5
      p   php5-sybase            - Sybase / MS SQL Server module for php5
      p   php5-tidy              - tidy module for php5
      p   php5-uuid              - OSSP uuid module for php5
      p   php5-xapian            - Xapian search engine interface for PHP5
      p   php5-xcache            - Fast, stable PHP opcode cacher
      p   php5-xmlrpc            - XML-RPC module for php5
      p   php5-xsl               - XSL module for php5

      aptitude show php5 | grep Version
      Version : 5.2.0-8+etch13

      aptitude show php5-cgi | grep Version
      Version : 5.2.0-8+etch13

      php5 --version
      -bash: php5: command not found

      php-cgi --version
      PHP 5.2.0-8+etch13 (cgi-fcgi) (built: Oct 2 2008 08:21:17)
      Copyright (c) 1997-2006 The PHP Group
      Zend Engine v2.2.0, Copyright (c) 1998-2006 Zend Technologies

  • radvd is not assigning prefix

    - by Samik
    I'm currently trying to set up IPv6 address auto-configuration with the router advertisement daemon (radvd) on a virtual machine running CentOS 6.5, but the eth0 interface is not obtaining that prefix. I've obtained the ULA prefix from here.

    Contents of /etc/sysctl.conf:

      # Kernel sysctl configuration file for Red Hat Linux
      #
      # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
      # sysctl.conf(5) for more details.

      # Controls IP packet forwarding
      net.ipv4.ip_forward = 0
      net.ipv6.conf.all.forwarding = 1

      # Controls source route verification
      net.ipv4.conf.default.rp_filter = 1

      # Do not accept source routing
      net.ipv4.conf.default.accept_source_route = 0

      # Controls the System Request debugging functionality of the kernel
      kernel.sysrq = 0

      # Controls whether core dumps will append the PID to the core filename.
      # Useful for debugging multi-threaded applications.
      kernel.core_uses_pid = 1

      # Controls the use of TCP syncookies
      net.ipv4.tcp_syncookies = 1

      # Disable netfilter on bridges.
      net.bridge.bridge-nf-call-ip6tables = 0
      net.bridge.bridge-nf-call-iptables = 0
      net.bridge.bridge-nf-call-arptables = 0

      # Controls the default maxmimum size of a mesage queue
      kernel.msgmnb = 65536

      # Controls the maximum size of a message, in bytes
      kernel.msgmax = 65536

      # Controls the maximum shared segment size, in bytes
      kernel.shmmax = 68719476736

      # Controls the maximum number of shared memory segments, in pages
      kernel.shmall = 4294967296

    Contents of /etc/radvd.conf:

      # NOTE: there is no such thing as a working "by-default" configuration file.
      # At least the prefix needs to be specified. Please consult the radvd.conf(5)
      # man page and/or /usr/share/doc/radvd-*/radvd.conf.example for help.
      #
      interface eth0
      {
          AdvSendAdvert on;
          MinRtrAdvInterval 3;
          MaxRtrAdvInterval 10;
          AdvDefaultPreference low;
          AdvHomeAgentFlag off;

          prefix fd8a:8d9d:808f:1::/64
          {
              AdvOnLink on;
              AdvAutonomous on;
              AdvRouterAddr on;
          };
      };

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

      DEVICE=eth0
      HWADDR=52:54:00:74:d7:46
      TYPE=Ethernet
      UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
      ONBOOT=yes
      NM_CONTROLLED=no
      BOOTPROTO=dhcp
      IPV6INIT=yes
      IPV6_AUTOCONF=yes

    I've also enabled radvd at startup through chkconfig, though I noticed that radvd starts after the interfaces are brought up. I've tried restarting the network service afterwards, but I still get only the following link-local address:

      # ip -6 addr show
      1: lo: mtu 16436
          inet6 ::1/128 scope host
             valid_lft forever preferred_lft forever
      2: eth0: mtu 1500 qlen 1000
          inet6 fe80::5054:ff:fe74:d746/64 scope link
             valid_lft forever preferred_lft forever

    Edit: Based on the answer given by Sander Steffann, I still need clarification on some points, but I'm posting here what worked.

    Contents of /etc/sysconfig/network:

      NETWORKING=yes
      HOSTNAME=syslog-ng-server
      NETWORKING_IPV6=yes
      IPV6FORWARDING=yes

    Contents of /etc/sysconfig/network-scripts/ifcfg-eth0:

      DEVICE=eth0
      HWADDR=52:54:00:74:d7:46
      TYPE=Ethernet
      UUID=af5db1cb-e809-4098-be1a-5a74dbb767b1
      ONBOOT=yes
      NM_CONTROLLED=no
      BOOTPROTO=dhcp
      IPV6INIT=yes
      IPV6_AUTOCONF=yes
      IPV6FORWARDING=no

    Removed the following line from /etc/sysctl.conf:

      net.ipv6.conf.all.forwarding = 1

    The contents of /etc/radvd.conf are as before.

  • LUKS with LVM, mount is not persistent after reboot

    - by linxsaga
    I have created a logical volume and used LUKS to encrypt it. But while rebooting the server I get an error message (below), so I have to enter the root password and disable the /etc/fstab entry; the mount of the LUKS partition does not persist across reboots. I have this setup on RHEL 6 and am wondering what I could be missing. I want the LV to be mounted on reboot. Later I would want to replace the device name with the UUID. Error message on reboot:

      Give root password for maintenance
      (or type Control-D to continue):

    Here are the steps from the beginning:

      [root@rhel6 ~]# pvcreate /dev/sdb
        Physical volume "/dev/sdb" successfully created
      [root@rhel6 ~]# vgcreate vg01 /dev/sdb
        Volume group "vg01" successfully created
      [root@rhel6 ~]# lvcreate --size 500M -n lvol1 vg01
        Logical volume "lvol1" created
      [root@rhel6 ~]# lvdisplay
        --- Logical volume ---
        LV Name                /dev/vg01/lvol1
        VG Name                vg01
        LV UUID                nX9DDe-ctqG-XCgO-2wcx-ddy4-i91Y-rZ5u91
        LV Write Access        read/write
        LV Status              available
        # open                 0
        LV Size                500.00 MiB
        Current LE             125
        Segments               1
        Allocation             inherit
        Read ahead sectors     auto
        - currently set to     256
        Block device           253:0
      [root@rhel6 ~]# cryptsetup luksFormat /dev/vg01/lvol1

      WARNING!
      ========
      This will overwrite data on /dev/vg01/lvol1 irrevocably.

      Are you sure? (Type uppercase yes): YES
      Enter LUKS passphrase:
      Verify passphrase:

      [root@rhel6 ~]# mkdir /house
      [root@rhel6 ~]# cryptsetup luksOpen /dev/vg01/lvol1 house
      Enter passphrase for /dev/vg01/lvol1:
      [root@rhel6 ~]# mkfs.ext4 /dev/mapper/house
      mke2fs 1.41.12 (17-May-2010)
      Filesystem label=
      OS type: Linux
      Block size=1024 (log=0)
      Fragment size=1024 (log=0)
      Stride=0 blocks, Stripe width=0 blocks
      127512 inodes, 509952 blocks
      25497 blocks (5.00%) reserved for the super user
      First data block=1
      Maximum filesystem blocks=67633152
      63 block groups
      8192 blocks per group, 8192 fragments per group
      2024 inodes per group
      Superblock backups stored on blocks:
              8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

      Writing inode tables: done
      Creating journal (8192 blocks): done
      Writing superblocks and filesystem accounting information: done

      This filesystem will be automatically checked every 21 mounts or
      180 days, whichever comes first. Use tune2fs -c or -i to override.

      [root@rhel6 ~]# mount -t ext4 /dev/mapper/house /house

    PS: Here I have mounted it successfully:

      [root@rhel6 ~]# ls /house/
      lost+found

    My /etc/fstab entry is as follows:

      /dev/mapper/house    /house    ext4    defaults    1 2

    And the /etc/crypttab entry:

      house    /dev/vg01/lvol1    password

    Remounting also works:

      [root@rhel6 ~]# mount -o remount /house
      [root@rhel6 ~]# ls /house/
      lost+found
      [root@rhel6 ~]# umount /house/
      [root@rhel6 ~]# mount -a      -> SUCCESSFUL AGAIN
      [root@rhel6 ~]# ls /house/
      lost+found

    Please let me know if I am missing anything here. Thanks in advance.

  • running localhost mta vs. php smtp'ing via 3rd party api

    - by nandoP
    So the question is: would it be "better" to run a localhost MTA (e.g. Postfix), or "better" to use a 3rd-party RESTful API, embedded in the application, to send email? I am curious what people would do here. I find Postfix on Linux allows much greater flexibility and control. The default sendmail/postfix logging (/var/log/maillog) suits me fine, and you can even set per-UID limits via iptables, which allows rate-limiting apps.

  • Accessing guests on virtual network when connected to host via PPTP

    - by Viktor Elofsson
    I'm setting up a development machine which runs Ubuntu 12.04 and KVM for virtualization. I have a guest running Ubuntu 12.04 which can be accessed from the host via its IP address, which is assigned by libvirt. The guest can also access the internet; no problem there. However, now I want to set up PPTP so I can connect to the host (from my workstation running Windows 7) and directly access guests without relying on SSH port forwarding. I can connect from my W7 machine to the host (PPTP), but I cannot access any virtual machines (which are accessible from the host directly). Relevant configuration files:

      cat /etc/network/interfaces

      auto lo
      iface lo inet loopback

      # device: eth0
      auto eth0
      iface eth0 inet static
          address x.x.x.x
          broadcast x.x.x.x
          netmask x.x.x.x
          gateway x.x.x.x
          # default route to access subnet
          up route add -net x.x.x.x netmask x.x.x.x gw x.x.x.x eth0

      virsh net-edit default

      <network>
        <name>default</name>
        <uuid>xxxxxxxx-72ce-3c20-af0f-d3a010f1bef0</uuid>
        <forward mode='nat'/>
        <bridge name='virbr0' stp='on' delay='0' />
        <mac address='52:54:00:xx:xx:xx'/>
        <ip address='192.168.122.1' netmask='255.255.255.0'>
          <dhcp>
            <range start='192.168.122.2' end='192.168.122.254' />
            <host mac='52:54:00:yy:yy:yy' name='web1' ip='192.168.122.11' />
          </dhcp>
        </ip>
      </network>

      cat /etc/pptpd.conf   (commented lines removed)

      # TAG: option
      #   Specifies the location of the PPP options file.
      #   By default PPP looks in '/etc/ppp/options'
      #
      option /etc/ppp/pptpd-options

      # TAG: logwtmp
      #   Use wtmp(5) to record client connections and disconnections.
      #
      logwtmp

      # (Recommended)
      localip 192.168.122.1
      remoteip 192.168.122.234-238,192.168.122.245

      cat /etc/ppp/chap-secrets

      # Secrets for authentication using CHAP
      # client    server    secret        IP addresses
      xxxxx       *         yyyyyyyyyy    192.168.122.100

    I get the correct IP address when connecting my W7 machine, but when I try to ping the virtual machine at 192.168.122.11 I get "Reply from 192.168.122.1: Destination port unreachable". It's probably something trivial I'm missing, but I can't for the life of me figure out what it is. So I'm turning to you, Server Fault.

  • xen-create-image does not create an initrd or initramfs image, and domU does not start with the system image

    - by user219372
    I have Fedora 19 as Dom0. To create an image I run:

      # xen-create-image --hostname=debian-wheezy --memory=512Mb --dhcp --size=20Gb --swap=512Mb --dir=/xen --arch=amd64 --dist=wheezy

    After generation finishes I start the VM and see:

      # xl create /etc/xen/debian-wheezy.cfg
      Parsing config from /etc/xen/debian-wheezy.cfg
      libxl: error: libxl_dom.c:409:libxl__build_pv: xc_dom_ramdisk_file failed: No such file or directory
      libxl: error: libxl_create.c:919:domcreate_rebuild_done: cannot (re-)build domain: -3

    In /etc/xen/debian-wheezy.cfg I have:

      #
      #  Kernel + memory size
      #
      kernel = '/boot/vmlinuz-3.11.2-201.fc19.x86_64'
      ramdisk = '/boot/initrd.img-3.11.2-201.fc19.x86_64'

    and ls -1 /boot/*201* shows:

      /boot/config-3.11.2-201.fc19.x86_64
      /boot/initramfs-3.11.2-201.fc19.x86_64.img
      /boot/System.map-3.11.2-201.fc19.x86_64
      /boot/vmlinuz-3.11.2-201.fc19.x86_64

    Then, if I fix the ramdisk directive in the .cfg file to /boot/initramfs-3.11.2-201.fc19.x86_64.img, the VM starts, but the OS inside does not boot. In the tail of xl console I get:

      [  OK  ] Reached target Basic System.
      dracut-initqueue[130]: Warning: Could not boot.
      dracut-initqueue[130]: Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
      dracut-initqueue[130]: Warning: /dev/fedora/root does not exist
      dracut-initqueue[130]: Warning: /dev/fedora/swap does not exist
      dracut-initqueue[130]: Warning: /dev/mapper/fedora-root does not exist
      dracut-initqueue[130]: Warning: /dev/mapper/fedora-swap does not exist
      dracut-initqueue[130]: Warning: /dev/xvda2 does not exist
      Starting Dracut Emergency Shell...
      Warning: /dev/disk/by-uuid/085883ad-73ca-45cc-8bc5-e6249f869b26 does not exist
      Warning: /dev/fedora/root does not exist
      Warning: /dev/fedora/swap does not exist
      Warning: /dev/mapper/fedora-root does not exist
      Warning: /dev/mapper/fedora-swap does not exist
      Warning: /dev/xvda2 does not exist
      Generating "/run/initramfs/sosreport.txt"
      Entering emergency mode. Exit the shell to continue.
      Type "journalctl" to view system logs.
      You might want to save "/run/initramfs/sosreport.txt" to a USB stick or /boot
      after mounting them and attach it to a bug report.
      dracut:/#

    The .img files in /xen/domains/debian-wheezy exist and are listed in the disk section of debian-wheezy.cfg. So what should I do?

    Update: I've found that xl does not mount the images. In debian-wheezy.cfg I have this:

      root = '/dev/xvda2 ro'
      disk = [
          'file:/xen/domains/debian-wheezy/disk.img,xvda2,w',
          'file:/xen/domains/debian-wheeze/swap.img,xvda1,w',
      ]

    And there is no /dev/xvda*, /dev/sda* or /dev/hda* device in the VM.

  • How to use nginx to proxy to a host requiring authentication?

    - by bwizzy
    How can I set up an nginx proxy_pass directive that will also include HTTP Basic authentication information sent to the proxied host? This is an example of the URL I need to proxy to:

      http://username:[email protected]/export?uuid=1234567890

    The end goal is to allow one server to present files from another server (the one we're proxying to) without exposing the URI of the proxy server. I have this working 90% correctly now from following the nginx config found here: http://kovyrin.net/2010/07/24/nginx-fu-x-accel-redirect-remote/ I just need to add in the HTTP Basic authentication to send to the proxied server.
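
    Since nginx does not forward credentials embedded in the upstream URL, the usual approach is to set the Authorization header on the proxied request yourself with proxy_set_header; the header value is just "Basic " plus the Base64 of "user:password". A hedged sketch computing that value (credentials below are hypothetical; the nginx side is shown in the comment):

      import base64

      user, password = "username", "secret"
      token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")

      # paste the printed value into the nginx location block, e.g.:
      #   location /export/ {
      #       proxy_pass http://upstream-host;
      #       proxy_set_header Authorization "Basic <token>";
      #   }
      print("Basic " + token)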

  • flashcache with mdadm and LVM

    - by Backtogeek
    I am having trouble setting up flashcache on a system with LVM and mdadm. I suspect I am either missing an obvious step or getting some mapping wrong, and hoped someone could point me in the right direction. System info: CentOS 6.4, 64-bit.

    mdadm config:

      md0 : active raid1 sdd3[2] sde3[3] sdf3[4] sdg3[5] sdh3[1] sda3[0]
            204736 blocks super 1.0 [6/6] [UUUUUU]
      md2 : active raid6 sdd5[2] sde5[3] sdf5[4] sdg5[5] sdh5[1] sda5[0]
            3794905088 blocks super 1.1 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      md3 : active raid0 sdc1[1] sdb1[0]
            250065920 blocks super 1.1 512k chunks
      md1 : active raid10 sdh1[1] sda1[0] sdd1[2] sdf1[4] sdg1[5] sde1[3]
            76749312 blocks super 1.1 512K chunks 2 near-copies [6/6] [UUUUUU]

    pvscan:

      PV /dev/mapper/ssdcache   VG Xenvol   lvm2 [3.53 TiB / 3.53 TiB free]
      Total: 1 [3.53 TiB] / in use: 1 [3.53 TiB] / in no VG: 0 [0 ]

    flashcache create command used:

      flashcache_create -p back ssdcache /dev/md3 /dev/md2

    pvdisplay:

      --- Physical volume ---
      PV Name               /dev/mapper/ssdcache
      VG Name               Xenvol
      PV Size               3.53 TiB / not usable 106.00 MiB
      Allocatable           yes
      PE Size               128.00 MiB
      Total PE              28952
      Free PE               28912
      Allocated PE          40
      PV UUID               w0ENVR-EjvO-gAZ8-TQA1-5wYu-ISOk-pJv7LV

    vgdisplay:

      --- Volume group ---
      VG Name               Xenvol
      System ID
      Format                lvm2
      Metadata Areas        1
      Metadata Sequence No  2
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1
      Open LV               1
      Max PV                0
      Cur PV                1
      Act PV                1
      VG Size               3.53 TiB
      PE Size               128.00 MiB
      Total PE              28952
      Alloc PE / Size       40 / 5.00 GiB
      Free  PE / Size       28912 / 3.53 TiB
      VG UUID               7vfKWh-ENPb-P8dV-jVlb-kP0o-1dDd-N8zzYj

    So that is where I am at. I thought that was the job done; however, when creating a logical volume called test and mounting it at /mnt/test, the sequential write is pathetic, around 60 MB/s. /dev/md3 is 2 x SSDs in RAID 0, which alone performs at around 800 MB/s sequential write, and I am trying to cache /dev/md2, which is 6 x 1 TB drives in RAID 6. I have read a number of pages through the day, some of them here; it is obvious from the results that the cache is not functioning, but I am unsure why. I have added the filter line in lvm.conf:

      filter = [ "r|/dev/sdb|", "r|/dev/sdc|", "r|/dev/md3|" ]

    It is probably something silly, but the cache is clearly performing no writes, so I suspect I have not mapped it or mounted the cache correctly.

      dmsetup status

      ssdcache: 0 7589810176 flashcache stats:
          reads(142), writes(0)
          read hits(133), read hit percent(93)
          write hits(0) write hit percent(0)
          dirty write hits(0) dirty write hit percent(0)
          replacement(0), write replacement(0)
          write invalidates(0), read invalidates(0)
          pending enqueues(0), pending inval(0)
          metadata dirties(0), metadata cleans(0)
          metadata batch(0) metadata ssd writes(0)
          cleanings(0) fallow cleanings(0)
          no room(0) front merge(0) back merge(0)
          force_clean_block(0)
          disk reads(9), disk writes(0) ssd reads(133) ssd writes(9)
          uncached reads(0), uncached writes(0), uncached IO requeue(0)
          disk read errors(0), disk write errors(0) ssd read errors(0) ssd write errors(0)
          uncached sequential reads(0), uncached sequential writes(0)
          pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
          lru hot blocks(31136000), lru warm blocks(31136000)
          lru promotions(0), lru demotions(0)
      Xenvol-test: 0 10485760 linear

    I have included as much info as I can think of; I look forward to any replies.

  • Installing ArchLinux into Ubuntu 12.04 root

    - by Johnny
    Is it possible to install two Linux distros into one root, so they share the same UIDs and GIDs, configs and packages, plus the same user /home folder? For example: I already have Ubuntu and Windows 7 in dual boot on my laptop. Could I install Arch's base, base-devel and kernel so that it won't conflict with Ubuntu in the same root folder?

    P.S. I don't feel like repartitioning my drive again, because there's a very complicated hierarchy which occupies the entire disk. =)

  • EXC_BAD_INSTRUCTION (SIGILL) at random during use of app. Bug in AppKit?

    - by Ger Teunis
    I'm currently testing a new version of an app of mine on OS X 10.5. A user reported some weird crashes during use of the application, sadly not reproducible by me. At first sight it seems to happen randomly: once he had the crash while opening an NSOpenPanel, once while focusing an NSTextField, and once during an NSView switch in a parent view. If you have any idea which area I should look at, it would be greatly appreciated! I'm completely lost here. The app is compiled in Xcode 3.2.1 with the 10.5 SDK and targeted at 10.5. He sent me these crashes:

    Crash 1:

      Process:         NZBVortex [43622]
      Path:            /Users/cero/Downloads/NZBVortex.app/Contents/MacOS/NZBVortex
      Identifier:      com.NZBVortex.NZBVortex
      Version:         0.5.5 (0.5.5)
      Code Type:       X86-64 (Native)
      Parent Process:  launchd [97]

      Interval Since Last Report:          1951 sec
      Crashes Since Last Report:           1
      Per-App Interval Since Last Report:  1858 sec
      Per-App Crashes Since Last Report:   1

      Date/Time:       2010-03-23 23:43:49.671 +0100
      OS Version:      Mac OS X 10.5.8 (9L31a)
      Report Version:  6
      Anonymous UUID:  98AB0386-590B-4E0D-B7AC-3F7AA4E7238E

      Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
      Exception Codes: 0x0000000000000001, 0x0000000000000000
      Crashed Thread:  0

      Application Specific Information:
      objc[43622]: alt handlers in objc runtime are buggy!

      Thread 0 Crashed:
      0   libobjc.A.dylib            0x00007fff82baef6e _objc_fatal + 238
      1   libobjc.A.dylib            0x00007fff82bb2ea4 objc_addExceptionHandler + 302
      2   com.apple.CoreFoundation   0x00007fff842b1090 _CFDoExceptionOperation + 528
      3   com.apple.AppKit           0x00007fff81f75e26 _NSAppKitLock + 81
      4   com.apple.AppKit           0x00007fff81f80f8f -[NSView nextKeyView] + 56
      5   com.apple.AppKit           0x00007fff81f81018 -[NSView _primitiveSetNextKeyView:] + 72
      6   com.apple.AppKit           0x00007fff820732b1 -[NSView _recursiveSetDefaultKeyViewLoop] + 242
      7   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      8   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      9   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      10  com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      11  com.apple.AppKit           0x00007fff82072fc3 -[NSView _setDefaultKeyViewLoop] + 279
      12  com.apple.AppKit           0x00007fff82072e70 -[NSWindow recalculateKeyViewLoop] + 36
      13  com.apple.AppKit           0x00007fff821dd149 -[NSSavePanel(NSSavePanelRuntime) _loadPreviousModeAndLayout] + 39
      14  com.apple.AppKit           0x00007fff821dcf9e -[NSSavePanel(NSSavePanelRuntime) runModalForDirectory:file:types:] + 71
      15  com.NZBVortex.NZBVortex    0x000000010000b7ee -[MainWindowViewController openNZBFileButtonClick:] + 62
      16  com.apple.AppKit           0x00007fff821c96bf -[NSToolbarButton sendAction:to:] + 77
      17  com.apple.AppKit           0x00007fff821c8bb7 -[NSToolbarItemViewer mouseDown:] + 5362
      18  com.apple.AppKit           0x00007fff82082783 -[NSWindow sendEvent:] + 5068
      19  com.apple.AppKit           0x00007fff8204fd46 -[NSApplication sendEvent:] + 5089
      20  com.apple.AppKit           0x00007fff81faa562 -[NSApplication run] + 497
      21  com.apple.AppKit           0x00007fff81f772f0 NSApplicationMain + 373
      22  com.NZBVortex.NZBVortex    0x0000000100012a69 main + 9
      23  com.NZBVortex.NZBVortex    0x0000000100001a84 start + 52

    Crash 2:

      Process:         NZBVortex [43600]
      Path:            /Users/cero/Downloads/NZBVortex.app/Contents/MacOS/NZBVortex
      Identifier:      com.NZBVortex.NZBVortex
      Version:         0.5.5 (0.5.5)
      Code Type:       X86-64 (Native)
      Parent Process:  launchd [97]

      Interval Since Last Report:          727 sec
      Crashes Since Last Report:           1
      Per-App Interval Since Last Report:  616 sec
      Per-App Crashes Since Last Report:   1

      Date/Time:       2010-03-23 23:11:20.000 +0100
      OS Version:      Mac OS X 10.5.8 (9L31a)
      Report Version:  6
      Anonymous UUID:  98AB0386-590B-4E0D-B7AC-3F7AA4E7238E

      Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
      Exception Codes: 0x0000000000000001, 0x0000000000000000
      Crashed Thread:  0

      Application Specific Information:
      objc[43600]: alt handlers in objc runtime are buggy!

      Thread 0 Crashed:
      0   libobjc.A.dylib            0x00007fff82baef6e _objc_fatal + 238
      1   libobjc.A.dylib            0x00007fff82bb2ea4 objc_addExceptionHandler + 302
      2   com.apple.CoreFoundation   0x00007fff842b1090 _CFDoExceptionOperation + 528
      3   com.apple.AppKit           0x00007fff81f75e26 _NSAppKitLock + 81
      4   com.apple.AppKit           0x00007fff81f80f8f -[NSView nextKeyView] + 56
      5   com.apple.AppKit           0x00007fff81f81018 -[NSView _primitiveSetNextKeyView:] + 72
      6   com.apple.AppKit           0x00007fff820732b1 -[NSView _recursiveSetDefaultKeyViewLoop] + 242
      7   com.apple.AppKit           0x00007fff82156700 -[NSTabView _recursiveSetDefaultKeyViewLoop] + 119
      8   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      9   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      10  com.apple.AppKit           0x00007fff82072fc3 -[NSView _setDefaultKeyViewLoop] + 279
      11  com.apple.AppKit           0x00007fff82072e70 -[NSWindow recalculateKeyViewLoop] + 36
      12  com.NZBVortex.NZBVortex    0x000000010000b527 -[MainWindowViewController showView:sender:] + 1639
      13  com.NZBVortex.NZBVortex    0x000000010000ae6b -[MainWindowViewController preferencesSaveAlertDidEnd:returnCode:contextInfo:] + 91
      14  com.apple.AppKit           0x00007fff82224291 -[NSAlert didEndAlert:returnCode:contextInfo:] + 107
      15  com.apple.AppKit           0x00007fff82224197 -[NSAlert buttonPressed:] + 279
      16  com.apple.AppKit           0x00007fff82085d46 -[NSApplication sendAction:to:from:] + 97
      17  com.apple.AppKit           0x00007fff82085c7f -[NSControl sendAction:to:] + 97
      18  com.apple.AppKit           0x00007fff820851b0 -[NSCell trackMouse:inRect:ofView:untilMouseUp:] + 1841
      19  com.apple.AppKit           0x00007fff820849d6 -[NSButtonCell trackMouse:inRect:ofView:untilMouseUp:] + 611
      20  com.apple.AppKit           0x00007fff8208422f -[NSControl mouseDown:] + 735
      21  com.apple.AppKit           0x00007fff82082783 -[NSWindow sendEvent:] + 5068
      22  com.apple.AppKit           0x00007fff8204fd46 -[NSApplication sendEvent:] + 5089
      23  com.apple.AppKit           0x00007fff81faa562 -[NSApplication run] + 497
      24  com.apple.AppKit           0x00007fff81f772f0 NSApplicationMain + 373
      25  com.NZBVortex.NZBVortex    0x0000000100012a69 main + 9
      26  com.NZBVortex.NZBVortex    0x0000000100001a84 start + 52

    Crash 3:

      Process:         NZBVortex [43520]
      Path:            /Users/cero/Downloads/NZBVortex.app/Contents/MacOS/NZBVortex
      Identifier:      com.NZBVortex.NZBVortex
      Version:         0.5.5 (0.5.5)
      Code Type:       X86-64 (Native)
      Parent Process:  launchd [97]

      Interval Since Last Report:          23487 sec
      Crashes Since Last Report:           2
      Per-App Interval Since Last Report:  2025 sec
      Per-App Crashes Since Last Report:   1

      Date/Time:       2010-03-23 22:59:05.484 +0100
      OS Version:      Mac OS X 10.5.8 (9L31a)
      Report Version:  6
      Anonymous UUID:  98AB0386-590B-4E0D-B7AC-3F7AA4E7238E

      Exception Type:  EXC_BAD_INSTRUCTION (SIGILL)
      Exception Codes: 0x0000000000000001, 0x0000000000000000
      Crashed Thread:  0

      Application Specific Information:
      objc[43520]: alt handlers in objc runtime are buggy!

      Thread 0 Crashed:
      0   libobjc.A.dylib            0x00007fff82baef6e _objc_fatal + 238
      1   libobjc.A.dylib            0x00007fff82bb2ea4 objc_addExceptionHandler + 302
      2   com.apple.CoreFoundation   0x00007fff842b1090 _CFDoExceptionOperation + 528
      3   com.apple.AppKit           0x00007fff81f75e26 _NSAppKitLock + 81
      4   com.apple.AppKit           0x00007fff81f80f8f -[NSView nextKeyView] + 56
      5   com.apple.AppKit           0x00007fff81f81018 -[NSView _primitiveSetNextKeyView:] + 72
      6   com.apple.AppKit           0x00007fff820732b1 -[NSView _recursiveSetDefaultKeyViewLoop] + 242
      7   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      8   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      9   com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      10  com.apple.AppKit           0x00007fff82073300 -[NSView _recursiveSetDefaultKeyViewLoop] + 321
      11  com.apple.AppKit           0x00007fff82072fc3 -[NSView _setDefaultKeyViewLoop] + 279
      12  com.apple.AppKit           0x00007fff82072e70 -[NSWindow recalculateKeyViewLoop] + 36
      13  com.apple.AppKit           0x00007fff821dd149 -[NSSavePanel(NSSavePanelRuntime) _loadPreviousModeAndLayout] + 39
      14  com.apple.AppKit           0x00007fff821dcf9e -[NSSavePanel(NSSavePanelRuntime) runModalForDirectory:file:types:] + 71
      15  com.NZBVortex.NZBVortex    0x000000010000b7ee -[MainWindowViewController openNZBFileButtonClick:] + 62
      16  com.apple.AppKit           0x00007fff821c96bf -[NSToolbarButton sendAction:to:] + 77
      17  com.apple.AppKit           0x00007fff821c8bb7 -[NSToolbarItemViewer mouseDown:] + 5362
      18  com.apple.AppKit           0x00007fff82082783 -[NSWindow sendEvent:] + 5068
      19  com.apple.AppKit           0x00007fff8204fd46 -[NSApplication sendEvent:] + 5089
      20  com.apple.AppKit           0x00007fff81faa562 -[NSApplication run] + 497
      21  com.apple.AppKit           0x00007fff81f772f0 NSApplicationMain + 373
      22  com.NZBVortex.NZBVortex    0x0000000100012a69 main + 9
      23  com.NZBVortex.NZBVortex    0x0000000100001a84 start + 52

  • WCF WS-Security and WSE Nonce Authentication

    - by Rick Strahl
    WCF makes it fairly easy to access WS-* Web Services, except when you run into a service format that it doesn't support. Even then WCF provides a huge amount of flexibility to make the service clients work, however finding the proper interfaces to make that happen is not easy to discover and for the most part undocumented unless you're lucky enough to run into a blog, forum or StackOverflow post on the matter. This is definitely true for the Password Nonce as part of the WS-Security/WSE protocol, which is not natively supported in WCF. Specifically I had a need to create a WCF message on the client that includes a WS-Security header that looks like this from their spec document:<soapenv:Header> <wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <wsse:UsernameToken wsu:Id="UsernameToken-8" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <wsse:Username>TeStUsErNaMe1</wsse:Username> <wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >TeStPaSsWoRd1</wsse:Password> <wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" >f8nUe3YupTU5ISdCy3X9Gg==</wsse:Nonce> <wsu:Created>2011-05-04T19:01:40.981Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soapenv:Header> Specifically, the Nonce and Created keys are what WCF doesn't create or have a built in formatting for. Why is there a nonce? My first thought here was WTF? The username and password are there in clear text, what does the Nonce accomplish? The Nonce and created keys are are part of WSE Security specification and are meant to allow the server to detect and prevent replay attacks. The hashed nonce should be unique per request which the server can store and check for before running another request thus ensuring that a request is not replayed with exactly the same values. Basic ServiceUtl Import - not much Luck The first thing I did when I imported this service with a service reference was to simply import it as a Service Reference. The Add Service Reference import automatically detects that WS-Security is required and appropariately adds the WS-Security to the basicHttpBinding in the config file:<?xml version="1.0" encoding="utf-8" ?> <configuration> <system.serviceModel> <bindings> <basicHttpBinding> <binding name="RealTimeOnlineSoapBinding"> <security mode="Transport" /> </binding> <binding name="RealTimeOnlineSoapBinding1" /> </basicHttpBinding> </bindings> <client> <endpoint address="https://notarealurl.com:443/services/RealTimeOnline" binding="basicHttpBinding" bindingConfiguration="RealTimeOnlineSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> </configuration> If if I run this as is using code like this:var client = new RealTimeOnlineClient(); client.ClientCredentials.UserName.UserName = "TheUsername"; client.ClientCredentials.UserName.Password = "ThePassword"; … I get nothing in terms of WS-Security headers. The request is sent, but the the binding expects transport level security to be applied, rather than message level security. 
To fix this so that a WS-Security message header is sent the security mode can be changed to: <security mode="TransportWithMessageCredential" /> Now if I re-run I at least get a WS-Security header which looks like this:<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"> <s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <u:Timestamp u:Id="_0"> <u:Created>2012-11-24T02:55:18.011Z</u:Created> <u:Expires>2012-11-24T03:00:18.011Z</u:Expires> </u:Timestamp> <o:UsernameToken u:Id="uuid-18c215d4-1106-40a5-8dd1-c81fdddf19d3-1"> <o:Username>TheUserName</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> Closer! Now the WS-Security header is there along with a timestamp field (which might not be accepted by some WS-Security expecting services), but there's no Nonce or created timestamp as required by my original service. Using a CustomBinding instead My next try was to go with a CustomBinding instead of basicHttpBinding as it allows a bit more control over the protocol and transport configurations for the binding. Specifically I can explicitly specify the message protocol(s) used. Using configuration file settings here's what the config file looks like:<?xml version="1.0"?> <configuration> <system.serviceModel> <bindings> <customBinding> <binding name="CustomSoapBinding"> <security includeTimestamp="false" authenticationMode="UserNameOverTransport" defaultAlgorithmSuite="Basic256" requireDerivedKeys="false" messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10"> </security> <textMessageEncoding messageVersion="Soap11"></textMessageEncoding> <httpsTransport maxReceivedMessageSize="2000000000"/> </binding> </customBinding> </bindings> <client> <endpoint address="https://notrealurl.com:443/services/RealTimeOnline" binding="customBinding" bindingConfiguration="CustomSoapBinding" contract="RealTimeOnline.RealTimeOnline" name="RealTimeOnline" /> </client> </system.serviceModel> <startup> <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/> </startup> </configuration> This ends up creating a cleaner header that's missing the timestamp field which can cause some services problems. The WS-Security header output generated with the above looks like this:<s:Header> <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"> <o:UsernameToken u:Id="uuid-291622ca-4c11-460f-9886-ac1c78813b24-1"> <o:Username>TheUsername</o:Username> <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText" >ThePassword</o:Password> </o:UsernameToken> </o:Security> </s:Header> This is closer as it includes only the username and password. The key here is the protocol for WS-Security:messageSecurityVersion="WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10" which explicitly specifies the protocol version. There are several variants of this specification but none of them seem to support the nonce unfortunately. 
    This protocol does allow the Nonce and Created timestamp to be omitted, which effectively makes those keys optional. Some services I tried that requested a Nonce actually worked with just this protocol where the default basicHttpBinding failed to connect, so this is a possible solution for accessing some services. Unfortunately, for my target service that was not an option - the nonce has to be there.

Creating Custom ClientCredentials

As it turns out, WCF doesn't have support for the Digest Nonce as part of WS-Security, and as far as I can tell there's no way to do it with configuration settings alone. I did a bunch of research trying to find workarounds, and I did find a couple of entries on StackOverflow as well as on the MSDN forums. However, none of them is particularly clear, and I ended up using bits and pieces of several of them to arrive at a working solution in the end.

http://stackoverflow.com/questions/896901/wcf-adding-nonce-to-usernametoken
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/4df3354f-0627-42d9-b5fb-6e880b60f8ee

The latter forum message is the more useful of the two (the last message on the thread in particular), and it has most of the information required to make this work. But it took some experimentation for me to get this right, so I'll recount the process here maybe a bit more comprehensively. In order for this to work, a number of classes have to be overridden:

ClientCredentials
ClientCredentialsSecurityTokenManager
WSSecurityTokenSerializer

The idea is that we need to create a custom ClientCredentials class to hold the custom properties so they can be set from the UI or via configuration settings. The token manager and token serializer are mainly required to allow the custom credentials class to flow through the WCF pipeline and eventually provide custom serialization.
    Here are the three classes required and their full implementations:

// required namespaces for the classes below
using System;
using System.IdentityModel.Selectors;
using System.IdentityModel.Tokens;
using System.Security.Cryptography;
using System.ServiceModel.Description;
using System.ServiceModel.Security;
using System.Text;

public class CustomCredentials : ClientCredentials
{
    public CustomCredentials()
    { }

    protected CustomCredentials(CustomCredentials cc)
        : base(cc)
    { }

    public override System.IdentityModel.Selectors.SecurityTokenManager CreateSecurityTokenManager()
    {
        return new CustomSecurityTokenManager(this);
    }

    protected override ClientCredentials CloneCore()
    {
        return new CustomCredentials(this);
    }
}

public class CustomSecurityTokenManager : ClientCredentialsSecurityTokenManager
{
    public CustomSecurityTokenManager(CustomCredentials cred)
        : base(cred)
    { }

    public override System.IdentityModel.Selectors.SecurityTokenSerializer CreateSecurityTokenSerializer(System.IdentityModel.Selectors.SecurityTokenVersion version)
    {
        return new CustomTokenSerializer(System.ServiceModel.Security.SecurityVersion.WSSecurity11);
    }
}

public class CustomTokenSerializer : WSSecurityTokenSerializer
{
    public CustomTokenSerializer(SecurityVersion sv)
        : base(sv)
    { }

    protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                           System.IdentityModel.Tokens.SecurityToken token)
    {
        UserNameSecurityToken userToken = token as UserNameSecurityToken;

        string tokennamespace = "o";

        // use UTC and a 24-hour clock ("HH"), since the trailing Z marks the
        // timestamp as UTC - the original forum code used DateTime.Now and "hh",
        // which produces a local, 12-hour-clock time mislabeled as UTC
        DateTime created = DateTime.UtcNow;
        string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

        // unique Nonce value - encode with SHA-1 for 'randomness'
        // in theory the nonce could just be the GUID by itself
        string phrase = Guid.NewGuid().ToString();
        var nonce = GetSHA1String(phrase);

        // in this case the password is plain text
        // for digest mode the password needs to be encoded as:
        // PasswordAsDigest = Base64(SHA-1(Nonce + Created + Password))
        // and the Password Type profile needs to change to #PasswordDigest
        //string password = GetSHA1String(nonce + createdStr + userToken.Password);
        string password = userToken.Password;

        writer.WriteRaw(string.Format(
            "<{0}:UsernameToken u:Id=\"" + token.Id +
            "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
            "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
            "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" +
            password + "</{0}:Password>" +
            "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
            nonce + "</{0}:Nonce>" +
            "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
    }

    protected string GetSHA1String(string phrase)
    {
        SHA1CryptoServiceProvider sha1Hasher = new SHA1CryptoServiceProvider();
        byte[] hashedDataBytes = sha1Hasher.ComputeHash(Encoding.UTF8.GetBytes(phrase));
        return Convert.ToBase64String(hashedDataBytes);
    }
}

Realistically only the CustomTokenSerializer has any significant code in it. The code there deals with actually serializing the custom credentials using low-level XML semantics by writing output into an XML writer. I can't take credit for this code - most of it comes from the MSDN forum post mentioned earlier. I made a few adjustments to simplify the nonce generation and also added some notes to allow for PasswordDigest generation.

Per spec, the nonce is nothing more than a unique value that's supposed to be 'random'. I'm thinking that this value can be any string that's unique, and a GUID on its own probably would have sufficed. Comments on other posts that GUIDs can potentially be guessed are highly exaggerated, to say the least, IMHO.
    To satisfy even that aspect, though, I added SHA-1 hashing of the GUID with Base64 encoding to give a more random value that would be impractical to guess. The original example from the forum post used another level of string encoding and decoding in between, but that really didn't accomplish anything except extra overhead.

The header output generated from this looks like this:

<s:Header>
  <o:Security s:mustUnderstand="1" xmlns:o="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd">
    <o:UsernameToken u:Id="uuid-f43d8b0d-0ebb-482e-998d-f544401a3c91-1" xmlns:u="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
      <o:Username>TheUsername</o:Username>
      <o:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">ThePassword</o:Password>
      <o:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">PjVE24TC6HtdAnsf3U9c5WMsECY=</o:Nonce>
      <u:Created>2012-11-23T07:10:04.670Z</u:Created>
    </o:UsernameToken>
  </o:Security>
</s:Header>

which is exactly as it should be.

Password Digest?

In my case the password is passed in plain text over an SSL connection, so no digest was required and I was done with the code above. Since I don't have a service handy that requires a password digest, I had no way of testing the digest implementation, but here is how it is likely to work.

If you need to pass a digest-encoded password, things are a little bit trickier. The password type namespace needs to change to:

http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest

and then the password value needs to be encoded. The format for password digest encoding is this:

Base64(SHA-1(Nonce + Created + Password))

and it can be handled with the line that's commented out in the snippet above:

string password = GetSHA1String(nonce + createdStr + userToken.Password);

The entire WriteTokenCore method for the digest code looks like this:

protected override void WriteTokenCore(System.Xml.XmlWriter writer,
                                       System.IdentityModel.Tokens.SecurityToken token)
{
    UserNameSecurityToken userToken = token as UserNameSecurityToken;

    string tokennamespace = "o";

    // use UTC and a 24-hour clock ("HH") to match the trailing Z
    DateTime created = DateTime.UtcNow;
    string createdStr = created.ToString("yyyy-MM-ddTHH:mm:ss.fffZ");

    // unique Nonce value - encode with SHA-1 for 'randomness'
    // in theory the nonce could just be the GUID by itself
    string phrase = Guid.NewGuid().ToString();
    var nonce = GetSHA1String(phrase);

    string password = GetSHA1String(nonce + createdStr + userToken.Password);

    writer.WriteRaw(string.Format(
        "<{0}:UsernameToken u:Id=\"" + token.Id +
        "\" xmlns:u=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\">" +
        "<{0}:Username>" + userToken.UserName + "</{0}:Username>" +
        "<{0}:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#Digest\">" +
        password + "</{0}:Password>" +
        "<{0}:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">" +
        nonce + "</{0}:Nonce>" +
        "<u:Created>" + createdStr + "</u:Created></{0}:UsernameToken>", tokennamespace));
}

I had no service to connect to in order to try out Digest auth - if you end up needing it and get it to work, please drop a comment…

How to use the custom Credentials

The easiest way to use the custom credentials is to create the client in code.
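One caveat worth noting: in the official OASIS UsernameToken Profile, the digest is computed over the raw nonce bytes plus the Created string plus the password, not over the Base64 string form of the nonce as in the simplified code above. Some services accept the simplified variant, but if your target service rejects the digest, the spec-conformant computation is worth trying. Here's a small sketch of that computation - an illustration to check against your service's expectations, not tested against a live endpoint:

// Spec-conformant digest: Base64( SHA-1( nonce-bytes + created + password ) )
// where nonce-bytes are the raw nonce bytes, not their Base64 representation
using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordDigest
{
    public static string Compute(byte[] nonceBytes, string createdStr, string password)
    {
        byte[] createdBytes = Encoding.UTF8.GetBytes(createdStr);
        byte[] passwordBytes = Encoding.UTF8.GetBytes(password);

        // concatenate nonce + created + password into one buffer
        byte[] combined = new byte[nonceBytes.Length + createdBytes.Length + passwordBytes.Length];
        Buffer.BlockCopy(nonceBytes, 0, combined, 0, nonceBytes.Length);
        Buffer.BlockCopy(createdBytes, 0, combined, nonceBytes.Length, createdBytes.Length);
        Buffer.BlockCopy(passwordBytes, 0, combined,
                         nonceBytes.Length + createdBytes.Length, passwordBytes.Length);

        using (var sha1 = SHA1.Create())
            return Convert.ToBase64String(sha1.ComputeHash(combined));
    }
}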
Here's a factory method I use to create an instance of my service client:

public static RealTimeOnlineClient CreateRealTimeOnlineProxy(string url,
                                                             string username,
                                                             string password)
{
    if (string.IsNullOrEmpty(url))
        url = "https://notrealurl.com:443/cows/services/RealTimeOnline";

    CustomBinding binding = new CustomBinding();

    var security = TransportSecurityBindingElement.CreateUserNameOverTransportBindingElement();
    security.IncludeTimestamp = false;
    security.DefaultAlgorithmSuite = SecurityAlgorithmSuite.Basic256;
    security.MessageSecurityVersion = MessageSecurityVersion.WSSecurity10WSTrustFebruary2005WSSecureConversationFebruary2005WSSecurityPolicy11BasicSecurityProfile10;

    var encoding = new TextMessageEncodingBindingElement();
    encoding.MessageVersion = MessageVersion.Soap11;

    var transport = new HttpsTransportBindingElement();
    transport.MaxReceivedMessageSize = 20000000; // 20 megs

    binding.Elements.Add(security);
    binding.Elements.Add(encoding);
    binding.Elements.Add(transport);

    RealTimeOnlineClient client = new RealTimeOnlineClient(binding,
        new EndpointAddress(url));

    // swap in the custom credentials so the Nonce and Created values get generated
    // (it looks like this might not be required - the service seems to work without it)
    client.ChannelFactory.Endpoint.Behaviors.Remove<System.ServiceModel.Description.ClientCredentials>();
    client.ChannelFactory.Endpoint.Behaviors.Add(new CustomCredentials());

    client.ClientCredentials.UserName.UserName = username;
    client.ClientCredentials.UserName.Password = password;

    return client;
}

This returns a service client that's ready to call other service methods. The key item in this code is the ChannelFactory endpoint behavior modification that first removes the original ClientCredentials and then adds the new one. The ClientCredentials property on the client is read-only, and this is the way the custom version has to be added.

Summary

It's a bummer that WCF doesn't support WSE Security authentication with nonce values out of the box. From reading the comments in posts and articles while trying to find a solution, I gather that this feature was omitted by design, as the protocol is considered insecure. While I agree that plain text passwords are rarely a good idea even if they go over a secured SSL connection as WSE Security does, there are unfortunately quite a few services (mostly Java services, I suspect) that use this protocol. I've run into this twice now, and from trying to find a solution online I can see that this is not an isolated problem - many others seem to have struggled with it. There are about a dozen questions about this on StackOverflow, all with varying incomplete answers. Hopefully this post provides a little more coherent content in one place.

Again I marvel at WCF and the breadth of protocol features it supports in a single tool. And even when it can't handle something, there are ways to get it working via extensibility. But at the same time I marvel at how freaking difficult it is to arrive at these solutions. I mean, there's no way I could have ever figured this out on my own. It takes somebody working on the WCF team, or at least somebody very, very intricately involved in the innards of WCF, to figure out the interconnection of the various objects needed to do this from scratch. Luckily this is an older problem that has been discussed extensively online, and I was able to cobble together a solution from the online content.
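For completeness, here's how the factory might be consumed. GetQuote() is a hypothetical method standing in for whatever the actual service contract exposes; the close/abort pattern is the usual WCF proxy hygiene:

// usage sketch - GetQuote() is a hypothetical service method, not part of
// the actual RealTimeOnline contract
var client = CreateRealTimeOnlineProxy(null, "TheUsername", "ThePassword");
try
{
    var result = client.GetQuote("MSFT");
    Console.WriteLine(result);
    client.Close();
}
catch (Exception)
{
    // never Close() a faulted channel - Abort() it instead
    client.Abort();
    throw;
}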
I'm glad it worked out that way, but it feels dirty and incomplete in that there's a whole learning path that was omitted to get here… Man, am I glad I'm not dealing with SOAP services much anymore. REST service security - even when using some sort of federation - is a piece of cake by comparison :-) I'm sure once standards bodies get involved, we'll be right back in security standard hell…

© Rick Strahl, West Wind Technologies, 2005-2012
Posted in WCF  Web Services

    Read the article

  • How to automatically mount hibernated NTFS to read-only?

    - by Piotr
    Is there any way to set up Ubuntu this way: if it can't mount the filesystem in rw mode, then mount it in ro mode in the same directory? As a result, I should never run into the boot-time notification that the system can't mount the filesystem (the "Skip or manually fix" prompt). So when I start the system, my NTFS partitions should be mounted either in rw or ro mode, depending on whether Windows is hibernated.

fstab entry:

#/dev/sda7
UUID=D0B43178B43161E0 /media/Dane ntfs defaults,errors=remount-ro 0 1

"mount -a" result:

The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sda7': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume read-only with the 'ro' mount option.

I have Ubuntu 13.10 and Windows 8, and I use UEFI Secure Boot.
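One approach that should work - offered here as an untested sketch, not something from the question itself - is to mark the partition noauto in fstab and mount it from a small boot script that falls back to read-only when ntfs-3g refuses the read-write mount:

#!/bin/sh
# try a normal read-write mount first; if ntfs-3g refuses because
# Windows is hibernated, fall back to a read-only mount
MOUNTPOINT=/media/Dane
DEVICE=UUID=D0B43178B43161E0

mount -t ntfs-3g "$DEVICE" "$MOUNTPOINT" 2>/dev/null \
    || mount -t ntfs-3g -o ro "$DEVICE" "$MOUNTPOINT"

With the fstab entry changed to include noauto, the boot process no longer blocks on the failed mount, and the script (run from rc.local or a systemd unit) picks the best mode available.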

    Read the article

< Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >