Search Results

Search found 19256 results on 771 pages for 'boost log'.


  • Axis webservice calls fail sometimes, access.log shows content!

    - by epischel
    Hi, our app is a webservice client (Axis 1) to a third-party webservice (also Axis 1). We have been using it for some years now. Since a few weeks ago we (as the client) sometimes get HTTP status 400 (bad request) or read timeouts when calling the webservice. Strangely, the access.log of the service shows part of the request or the response instead of the URL. It looks like this (apparently the end of the request string):

      x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 16127 0
      x.x.x.x -> y.y.y.y:8080 - - [timestamp] "POST /webservice HTTP/1.0" 200 22511 1
      x.x.x.x -> y.y.y.y:8080 - - [timestamp] "il=\"true\"/><nsl:text xsi:type=\"xsd:string\" xsi:nil=\"true\"/></SOAPSomeOperation></soapenv:Body></soapenv:Envelope> Axis/1.4" 400 299 0

    or what looks like a string from the middle of the request:

      x.x.x.x -> y.y.y.y:8080 - - [timestamp] ":string\">some text</sometag><othertag>moretext" 400 299 0

    or, in some other cases, what looks like two requests thrown together (... means XML left out):

      x.x.x.x -> y.y.y.y:8080 - - [timestamp] "...</someop></soapenv:Body></soapenv:Envelope>:xsd=\"http://www.w3.org/2001/XMLSchema\"...</soapenv:Body></soapenv:Envelope>" 400 299 0

    The application log does not give any hints. Such calls make up about 1% of all calls to that service. The only discriminator I know of so far is that it started after operations informed us that the service URL had changed because of "server migration". Has anyone experienced this phenomenon? Does anybody have an idea what's wrong and how to fix it? Thanks,
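    A packet capture on the client side would show whether those garbled access.log entries correspond to requests that are split or interleaved on the wire. A minimal sketch, assuming tcpdump is available and substituting the real service host for y.y.y.y:

      tcpdump -i any -s 0 -w axis-traffic.pcap host y.y.y.y and tcp port 8080

    Following the TCP streams around a 400 response in Wireshark should reveal whether the client is reusing or splitting a connection mid-request, which is a common symptom when a proxy or load balancer is introduced during a "server migration".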


  • Is this too much code for a header-only library?

    - by Billy ONeal
    It seems like I had to inline quite a bit of code here. I'm wondering if it's bad design practice to leave this entirely in a header file like this:

      #pragma once
      #include <algorithm>             // added: needed for std::find
      #include <string>
      #include <boost/noncopyable.hpp>
      #include <boost/shared_ptr.hpp>  // added: shared_ptr is used directly below
      #include <boost/make_shared.hpp>
      #include <boost/iterator/iterator_facade.hpp>
      #include <Windows.h>
      #include "../Exception.hpp"

      namespace WindowsAPI { namespace FileSystem {

      class FileData;
      struct AllResults;
      struct FilesOnly;
      template <typename Filter_T = AllResults> class DirectoryIterator;

      namespace detail {

      class DirectoryIteratorImpl : public boost::noncopyable
      {
          WIN32_FIND_DATAW currentData;
          HANDLE hFind;
          std::wstring root;
      public:
          inline DirectoryIteratorImpl();
          inline explicit DirectoryIteratorImpl(const std::wstring& pathSpec);
          inline void increment();
          inline bool equal(const DirectoryIteratorImpl& other) const;
          inline const std::wstring& GetPathRoot() const;
          inline const WIN32_FIND_DATAW& GetCurrentFindData() const;
          inline ~DirectoryIteratorImpl();
      };

      }

      class FileData // Serves as a proxy to the WIN32_FIND_DATA structure inside the iterator.
      {
          boost::shared_ptr<detail::DirectoryIteratorImpl> iteratorSource;
      public:
          FileData(const boost::shared_ptr<detail::DirectoryIteratorImpl>& parent) : iteratorSource(parent) {};
          DWORD GetAttributes() const { return iteratorSource->GetCurrentFindData().dwFileAttributes; };
          bool IsDirectory() const { return (GetAttributes() & FILE_ATTRIBUTE_DIRECTORY) != 0; }; // & rather than | : with | the test is always true
          bool IsFile() const { return !IsDirectory(); };
          bool IsArchive() const { return (GetAttributes() & FILE_ATTRIBUTE_ARCHIVE) != 0; };
          bool IsReadOnly() const { return (GetAttributes() & FILE_ATTRIBUTE_READONLY) != 0; };
          unsigned __int64 GetSize() const {
              ULARGE_INTEGER intValue;
              intValue.LowPart = iteratorSource->GetCurrentFindData().nFileSizeLow;
              intValue.HighPart = iteratorSource->GetCurrentFindData().nFileSizeHigh;
              return intValue.QuadPart;
          };
          std::wstring GetFolderPath() const { return iteratorSource->GetPathRoot(); };
          std::wstring GetFileName() const { return iteratorSource->GetCurrentFindData().cFileName; };
          std::wstring GetFullFileName() const { return GetFolderPath() + GetFileName(); };
          std::wstring GetShortFileName() const { return iteratorSource->GetCurrentFindData().cAlternateFileName; };
          FILETIME GetCreationTime() const { return iteratorSource->GetCurrentFindData().ftCreationTime; };
          FILETIME GetLastAccessTime() const { return iteratorSource->GetCurrentFindData().ftLastAccessTime; };
          FILETIME GetLastWriteTime() const { return iteratorSource->GetCurrentFindData().ftLastWriteTime; };
      };

      struct AllResults : public std::unary_function<const FileData&, bool>
      {
          bool operator()(const FileData&) { return true; };
      };

      struct FilesOnly : public std::unary_function<const FileData&, bool>
      {
          bool operator()(const FileData& arg) { return arg.IsFile(); };
      };

      template <typename Filter_T>
      class DirectoryIterator : public boost::iterator_facade<DirectoryIterator<Filter_T>, const FileData, std::input_iterator_tag>
      {
          friend class boost::iterator_core_access;
          boost::shared_ptr<detail::DirectoryIteratorImpl> impl;
          FileData current;
          Filter_T filter;
          void increment() {
              do {
                  impl->increment();
              } while (!filter(current));
          };
          bool equal(const DirectoryIterator& other) const { return impl->equal(*other.impl); };
          const FileData& dereference() const { return current; };
      public:
          DirectoryIterator(Filter_T functor = Filter_T()) :
              impl(boost::make_shared<detail::DirectoryIteratorImpl>()),
              current(impl),
              filter(functor) { };
          explicit DirectoryIterator(const std::wstring& pathSpec, Filter_T functor = Filter_T()) :
              impl(boost::make_shared<detail::DirectoryIteratorImpl>(pathSpec)),
              current(impl),
              filter(functor) { };
      };

      namespace detail {

      DirectoryIteratorImpl::DirectoryIteratorImpl() : hFind(INVALID_HANDLE_VALUE) { }

      DirectoryIteratorImpl::DirectoryIteratorImpl(const std::wstring& pathSpec)
      {
          std::wstring::const_iterator lastSlash = std::find(pathSpec.rbegin(), pathSpec.rend(), L'\\').base();
          root.assign(pathSpec.begin(), lastSlash);
          hFind = FindFirstFileW(pathSpec.c_str(), &currentData);
          if (hFind == INVALID_HANDLE_VALUE)
              WindowsApiException::ThrowFromLastError();
          while (!wcscmp(currentData.cFileName, L".") || !wcscmp(currentData.cFileName, L"..")) {
              increment();
          }
      }

      void DirectoryIteratorImpl::increment()
      {
          BOOL success = FindNextFile(hFind, &currentData);
          if (success)
              return;
          DWORD error = GetLastError();
          if (error == ERROR_NO_MORE_FILES) {
              FindClose(hFind);
              hFind = INVALID_HANDLE_VALUE;
          } else {
              WindowsApiException::Throw(error);
          }
      }

      DirectoryIteratorImpl::~DirectoryIteratorImpl()
      {
          if (hFind != INVALID_HANDLE_VALUE)
              FindClose(hFind);
      }

      bool DirectoryIteratorImpl::equal(const DirectoryIteratorImpl& other) const
      {
          if (this == &other)
              return true;
          return hFind == other.hFind;
      }

      const std::wstring& DirectoryIteratorImpl::GetPathRoot() const { return root; }

      const WIN32_FIND_DATAW& DirectoryIteratorImpl::GetCurrentFindData() const { return currentData; }

      }

      }}
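    Since a default-constructed DirectoryIterator holds INVALID_HANDLE_VALUE, it presumably serves as the end sentinel, so usage would look something like this sketch (the path and filter are made up for illustration):

      DirectoryIterator<FilesOnly> it(L"C:\\Temp\\*", FilesOnly());
      DirectoryIterator<FilesOnly> end;
      for (; it != end; ++it)
          std::wcout << it->GetFullFileName() << L"\n";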


  • I want my logs sent to my mail with logrotate

    - by lericson
    Not strictly a question about programming as such, more of a log-handling question. Anyway: my company has multiple clients, and each of these clients has a set of logs that I'd very much like to have sent to me by e-mail. Another prerequisite is that they're highlighted with simple HTML. All that is very well; I've managed to make a highlighter for the given log types. So what I do is use logrotate's prerotate script to send the logs as an e-mail message. Example:

      /var/log/a.log /var/log/b.log {
          daily
          missingok
          copytruncate
          prerotate
              /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
          endscript
      }

    The problem with this approach is basically that logrotate sucks: it runs the command once for every log file specified in the stanza, and to my knowledge there's no way to know which of the log files is currently being handled. (Which wouldn't really help anyway.) Short of repeating the exact same logrotate configuration up to 10 times on different machines, the only thing I can do is get bogged down with log spam every night. And I grew tired of it today, so I ask.
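    If the goal is to run the mail step once per stanza rather than once per file, logrotate's sharedscripts directive does exactly that; a minimal sketch (paths and addresses copied from the question):

      /var/log/a.log /var/log/b.log {
          daily
          missingok
          copytruncate
          sharedscripts
          prerotate
              /usr/bin/python /home/foo/hilight_logs /var/log/{a,b}.log | /usr/sbin/sendmail -FLog\ mailer [email protected] [email protected]
          endscript
      }

    Without sharedscripts, logrotate runs the script once per log and passes that log's absolute path as the script's first argument, which is the other way to know which file is being handled.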


  • Login problem with PHP

    - by shinod
    I want to prevent multiple simultaneous logins with the same credentials. So I made a column login_status that is set to 1 when someone logs in and back to 0 when they log out, and I also set a session after a successful login. But if the user never clicks log out (because they closed the tab, or because of a network problem), the database is never updated, and then nobody can use those credentials again. So I added an AJAX call that stores the current timestamp in the database for the logged-in credentials, updated every 2 minutes while the user stays on the page. Then, if someone attempts to log in with the same credentials while login_status is 1, the timestamp is checked: if it is older than 3 minutes, the login is allowed. That solves the first problem. But here is the new problem: if a user closes the tab or browser window, then after 3 minutes someone else can log in with the same credentials from elsewhere; and if the previous user reopens that page, it logs them in automatically because the session is still set. How can I prevent this?
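    One hedged way out (a sketch, not a drop-in fix; the table and column names are assumptions) is to stop trusting the session cookie alone: store a session token in the database at login and re-check it on every request, so a stale session from a closed tab is rejected as soon as someone else logs in with the same credentials.

      <?php
      // Run at the top of every authenticated page. Assumes users(login_id,
      // session_token) and that the login code stored session_id() into
      // session_token when the user authenticated.
      session_start();
      $stmt = $pdo->prepare('SELECT session_token FROM users WHERE login_id = ?');
      $stmt->execute(array($_SESSION['login_id']));
      if ($stmt->fetchColumn() !== session_id()) {
          // Someone logged in later with these credentials; this older
          // session is no longer valid.
          session_destroy();
          header('Location: login.php');
          exit;
      }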


  • Why might my PHP log file not entirely be text?

    - by Fletcher Moore
    I'm trying to debug a plugin-bloated WordPress installation, so I've added a very simple homebrew logger that records all the callbacks, which are basically listed in a single, ultimately 250+ row multidimensional array in WordPress (I can't use print_r() because I need to catch them right before they are called). My logger line is:

      $logger->log("\t" . $callback . "\n");

    The logger produces a dandy text file in normal situations, but at two points during this particular task it adds something which causes my log file to no longer be encoded properly. Gedit (I'm on Ubuntu) won't open the file, claiming not to understand the encoding. In vim, the culprit corrupt callback (which I could not find in the debugger when looking at the array) is about in the middle and printed as ^@lambda_546, and at the end of the file there's this cute guy: ^M. The ^M and ^@ are blue in my vim, which has no color theme set for .txt files. I don't know what they mean. I tried adding an is_string($callback) condition, but I get the same results. Any ideas?
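    For what it's worth, ^@ is how vim displays a NUL byte and ^M a carriage return, and PHP's create_function() returns a name beginning with a NUL byte (e.g. "\0lambda_546"), which is still a string, so is_string() passes. A hedged logging sketch that renders such callbacks printably instead of corrupting the file:

      <?php
      // Sketch: render any WordPress callback as printable text before logging.
      function callback_to_text($callback) {
          if (is_array($callback)) {           // array(object-or-class, method) style
              $cls = is_object($callback[0]) ? get_class($callback[0]) : $callback[0];
              $callback = $cls . '::' . $callback[1];
          }
          // Escape NUL and other control characters (\0 .. \x1F) as octal sequences.
          return addcslashes((string) $callback, "\0..\37");
      }

      $logger->log("\t" . callback_to_text($callback) . "\n");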


  • Is there a way to log every GUI event in Delphi?

    - by awmross
    The Delphi debugger is great for debugging linear code, where one function calls other functions in a predictable, linear manner and we can step through the program line by line. I find the debugger less useful when dealing with event-driven GUI code, where a single line of code can cause new events to be triggered, which may in turn trigger other events. In this situation, the 'step through the code' approach doesn't let me see everything that is going on. The way I usually solve this is to 1) guess which events might be part of the problem, then 2) add breakpoints or logging to each of those events. The problem is that this approach is haphazard and time-consuming. Is there a switch I can flick in the debugger to say 'log all GUI events'? Or is there some code I can add to trap events, something like:

      procedure GuiEventCalled(ev: Event);
      begin
        log(ev);
        ev.call();
      end;

    The end result I'm looking for is something like this (for example):

      FieldA.KeyDown
      FieldA.KeyPress
      FieldA.OnChange
      FieldA.OnExit
      FieldB.OnEnter

    This would take all the guesswork out of Delphi GUI debugging. I am using Delphi 2010.
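    There is no debugger switch for this, but as a hedged sketch, the VCL lets you watch the posted Windows messages that drive most of those events via Application.OnMessage, and FindControl maps the window handle back to the control (sent messages such as WM_SETFOCUS do not pass through here, so the coverage is only approximate):

      procedure TForm1.FormCreate(Sender: TObject);
      begin
        Application.OnMessage := AppMessage;
      end;

      procedure TForm1.AppMessage(var Msg: TMsg; var Handled: Boolean);
      var
        Ctrl: TWinControl;
      begin
        case Msg.message of
          WM_KEYDOWN, WM_CHAR, WM_LBUTTONDOWN:
            begin
              Ctrl := FindControl(Msg.hwnd);
              if Ctrl <> nil then
                OutputDebugString(PChar(Format('%s: message $%x', [Ctrl.Name, Msg.message])));
            end;
        end;
        // leave Handled = False so normal processing continues
      end;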


  • Why does this log output show the same answer in each iteration?

    - by Will Hancock
    OK, I was reading an article on optimising JS for Google's V8 engine when I saw this code example. I nearly skimmed over it, but then I saw this: a[0] |= b. The example is:

      a = new Array();
      a[0] = 0;
      for (var b = 0; b < 10; b++) {
        console.log(a, b);
        a[0] |= b; // Much better! 2x faster.
      }

    So I ran it in my console, with a console.log in the loop, and every line showed 15:

      [15] 0
      [15] 1
      [15] 2
      [15] 3
      [15] 4
      [15] 5
      [15] 6
      [15] 7
      [15] 8
      [15] 9

    WHAT?!?! Where the hell does it get 15 from, on every iteration?!?! I've been a web dev for 7 years, and this has stumped me and a fellow colleague. Can somebody talk me through this code? Cheers.
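    For what it's worth, the likely explanation is twofold: console.log in browser consoles keeps a live reference to the array and only renders it when you inspect it later, after the loop has finished; and 0|1|2|...|9 = 15 (binary 1111), so the final value really is 15. Logging a snapshot shows the per-iteration values:

      a = [0];
      for (var b = 0; b < 10; b++) {
        console.log(a.slice(), b); // slice() copies the array, freezing its value
        a[0] |= b;
      }
      // logs: [0] 0, [0] 1, [1] 2, [3] 3, [3] 4, [7] 5, [7] 6, [7] 7, [7] 8, [15] 9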


  • Why is Log4Net not creating log file in production?

    - by uriDium
    I am using VS2005, a website project, a web deployment project and log4net. Logging works when I am developing locally: I can see the log files and everything is fine. When I build my website (using the web deployment project), I use the 'deploy as a single DLL' option. When I then check the locations where my log files should be, I cannot see any files. Is there a way to troubleshoot this? I don't think adding the debug value to the app settings will help, because this is a website and there is no console. EDIT I don't want the 150 rep to go to waste, so one last time: I compared log4net's internal trace from my dev environment to the trace from production. My dev environment trace shows the call to the XmlConfigurator where the production one does not. I have code in global.asax in the Application_Start() method. I put debug code in there, and it is getting called in dev but not in production. I think this is where the web deployment project is causing issues. Does global.asax get compiled into the single DLL? When I do a build in the deployment directory I see a global.compiled file. Must this go into the bin folder in production? Or is the global.asax code in the single DLL? Having both in the bin folder, or just the DLL, didn't change anything.
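    For reference, a minimal sketch of the two usual ways to make sure log4net is configured in a web site; if Application_Start never runs in production, the assembly-level attribute form sidesteps global.asax entirely (both read the log4net section of Web.config by default):

      // Option 1: in Global.asax / Global.asax.cs
      protected void Application_Start(object sender, EventArgs e)
      {
          log4net.Config.XmlConfigurator.Configure();
      }

      // Option 2: assembly-level attribute (e.g. in AssemblyInfo.cs), applied
      // when the first logger is requested -- no global.asax involvement.
      [assembly: log4net.Config.XmlConfigurator(Watch = true)]

    Enabling log4net's internal trace (the appSettings key log4net.Internal.Debug set to true, with a System.Diagnostics trace listener writing to a file) works without a console and will show in production which configuration step is missing.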


  • Can't get simple Apache VHost up and running

    - by TK Kocheran
    Unfortunately, I can't seem to get a simple Apache VHost online. I used to simply have one VHost which bound to all: <VirtualHost *:80>, but this isn't appropriate for security anymore. I need to have one VHost for localhost requests (i.e. my dev server) and one for incoming requests via my domain name. Here's my new VHost:

      NameVirtualHost domain1.com
      <VirtualHost domain1.com:80>
          DocumentRoot /var/www
          ServerName domain1.com
      </VirtualHost>

      <VirtualHost domain2.com:80>
          DocumentRoot /var/www
          ServerName domain2.com
      </VirtualHost>

    After I restart my server, I see the following errors in my log:

      [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs
      [Wed Feb 16 11:26:36 2011] [error] [client ####.###.###.###] File does not exist: /htdocs

    What am I doing wrong?

    EDIT As per the answer given below, I have modified my configuration. Here are my configuration files:

    /etc/apache2/ports.conf:

      Listen 80
      <IfModule mod_ssl.c>
          # If you add NameVirtualHost *:443 here, you will also have to change
          # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
          # to <VirtualHost *:443>
          # Server Name Indication for SSL named virtual hosts is currently not
          # supported by MSIE on Windows XP.
          Listen 443
      </IfModule>
      <IfModule mod_gnutls.c>
          Listen 443
      </IfModule>

    Here are my actual defined sites:

    /etc/apache2/sites-enabled/000-localhost:

      NameVirtualHost 127.0.0.1:80
      <VirtualHost 127.0.0.1:80>
          ServerAdmin #########
          DocumentRoot /var/www
          <Directory />
              Options FollowSymLinks
              AllowOverride None
          </Directory>
          <Directory /var/www/>
              Options Indexes FollowSymLinks MultiViews
              AllowOverride None
              Order allow,deny
              allow from all
          </Directory>
          ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
          <Directory "/usr/lib/cgi-bin">
              AllowOverride None
              Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
              Order allow,deny
              Allow from all
          </Directory>
          ErrorLog /var/log/apache2/error.log
          # Possible values include: debug, info, notice, warn, error, crit,
          # alert, emerg.
          LogLevel warn
          CustomLog /var/log/apache2/access.log combined
          Alias /doc/ "/usr/share/doc/"
          <Directory "/usr/share/doc/">
              Options Indexes MultiViews FollowSymLinks
              AllowOverride None
              Order deny,allow
              Deny from all
              Allow from 127.0.0.0/255.0.0.0 ::1/128
          </Directory>
          RewriteEngine On
          RewriteLog "/var/log/apache2/mod_rewrite.log"
          RewriteLogLevel 9
          <Location />
              <Limit GET POST PUT>
                  order allow,deny
                  allow from all
                  deny from 65.34.248.110
                  deny from 69.122.239.3
                  deny from 58.218.199.147
                  deny from 65.34.248.110
              </Limit>
          </Location>
      </VirtualHost>

    /etc/apache2/sites-enabled/001-rfkrocktk.dyndns.org:

      NameVirtualHost rfkrocktk.dyndns.org:80
      <VirtualHost rfkrocktk.dyndns.org:80>
          DocumentRoot /var/www
          ServerName rfkrocktk.dyndns.org
      </VirtualHost>

    And, just for kicks, my main file, /etc/apache2/apache2.conf:

      # # Based upon the NCSA server configuration files originally by Rob McCool. # # This is the main Apache server configuration file. It contains the # configuration directives that give the server its instructions. # See http://httpd.apache.org/docs/2.2/ for detailed information about # the directives. # # Do NOT simply read the instructions in here without understanding # what they do. They're here only as hints or reminders. If you are unsure # consult the online docs. You have been warned. # # The configuration directives are grouped into three basic sections: # 1. Directives that control the operation of the Apache server process as a # whole (the 'global environment'). # 2.
Directives that define the parameters of the 'main' or 'default' server, # which responds to requests that aren't handled by a virtual host. # These directives also provide default values for the settings # of all virtual hosts. # 3. Settings for virtual hosts, which allow Web requests to be sent to # different IP addresses or hostnames and have them handled by the # same Apache server process. # # Configuration and logfile names: If the filenames you specify for many # of the server's control files begin with "/" (or "drive:/" for Win32), the # server will use that explicit path. If the filenames do *not* begin # with "/", the value of ServerRoot is prepended -- so "/var/log/apache2/foo.log" # with ServerRoot set to "" will be interpreted by the # server as "//var/log/apache2/foo.log". # ### Section 1: Global Environment # # The directives in this section affect the overall operation of Apache, # such as the number of concurrent requests it can handle or where it # can find its configuration files. # # # ServerRoot: The top of the directory tree under which the server's # configuration, error, and log files are kept. # # NOTE! If you intend to place this on an NFS (or otherwise network) # mounted filesystem then please read the LockFile documentation (available # at <URL:http://httpd.apache.org/docs-2.1/mod/mpm_common.html#lockfile>); # you will save yourself a lot of trouble. # # Do NOT add a slash at the end of the directory path. # ServerRoot "/etc/apache2" # # The accept serialization lock file MUST BE STORED ON A LOCAL DISK. # #<IfModule !mpm_winnt.c> #<IfModule !mpm_netware.c> LockFile /var/lock/apache2/accept.lock #</IfModule> #</IfModule> # # PidFile: The file in which the server should record its process # identification number when it starts. # This needs to be set in /etc/apache2/envvars # PidFile ${APACHE_PID_FILE} # # Timeout: The number of seconds before receives and sends time out. # Timeout 300 # # KeepAlive: Whether or not to allow persistent connections (more than # one request per connection). Set to "Off" to deactivate. # KeepAlive On # # MaxKeepAliveRequests: The maximum number of requests to allow # during a persistent connection. Set to 0 to allow an unlimited amount. # We recommend you leave this number high, for maximum performance. # MaxKeepAliveRequests 100 # # KeepAliveTimeout: Number of seconds to wait for the next request from the # same client on the same connection. 
# KeepAliveTimeout 15 ## ## Server-Pool Size Regulation (MPM specific) ## # prefork MPM # StartServers: number of server processes to start # MinSpareServers: minimum number of server processes which are kept spare # MaxSpareServers: maximum number of server processes which are kept spare # MaxClients: maximum number of server processes allowed to start # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_prefork_module> StartServers 5 MinSpareServers 5 MaxSpareServers 10 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # worker MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_worker_module> StartServers 2 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxClients 150 MaxRequestsPerChild 0 </IfModule> # event MPM # StartServers: initial number of server processes to start # MaxClients: maximum number of simultaneous client connections # MinSpareThreads: minimum number of worker threads which are kept spare # MaxSpareThreads: maximum number of worker threads which are kept spare # ThreadsPerChild: constant number of worker threads in each server process # MaxRequestsPerChild: maximum number of requests a server process serves <IfModule mpm_event_module> StartServers 2 MaxClients 150 MinSpareThreads 25 MaxSpareThreads 75 ThreadLimit 64 ThreadsPerChild 25 MaxRequestsPerChild 0 </IfModule> # These need to be set in /etc/apache2/envvars User ${APACHE_RUN_USER} Group ${APACHE_RUN_GROUP} # # AccessFileName: The name of the file to look for in each directory # for additional configuration directives. See also the AllowOverride # directive. # AccessFileName .htaccess # # The following lines prevent .htaccess and .htpasswd files from being # viewed by Web clients. # <Files ~ "^\.ht"> Order allow,deny Deny from all Satisfy all </Files> # # DefaultType is the default MIME type the server will use for a document # if it cannot otherwise determine one, such as from filename extensions. # If your server contains mostly text or HTML documents, "text/plain" is # a good value. If most of your content is binary, such as applications # or images, you may want to use "application/octet-stream" instead to # keep browsers from trying to display binary files as though they are # text. # DefaultType text/plain # # HostnameLookups: Log the names of clients or just their IP addresses # e.g., www.apache.org (on) or 204.62.129.132 (off). # The default is off because it'd be overall better for the net if people # had to knowingly turn this feature on, since enabling it means that # each client request will result in AT LEAST one lookup request to the # nameserver. # HostnameLookups Off # ErrorLog: The location of the error log file. # If you do not specify an ErrorLog directive within a <VirtualHost> # container, error messages relating to that virtual host will be # logged here. If you *do* define an error logfile for a <VirtualHost> # container, that host's errors will be logged there and not here. # ErrorLog /var/log/apache2/error.log # # LogLevel: Control the number of messages logged to the error_log. 
# Possible values include: debug, info, notice, warn, error, crit, # alert, emerg. # LogLevel warn # Include module configuration: Include /etc/apache2/mods-enabled/*.load Include /etc/apache2/mods-enabled/*.conf # Include all the user configurations: Include /etc/apache2/httpd.conf # Include ports listing Include /etc/apache2/ports.conf # # The following directives define some format nicknames for use with # a CustomLog directive (see below). # If you are behind a reverse proxy, you might want to change %h into %{X-Forwarded-For}i # LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined LogFormat "%h %l %u %t \"%r\" %>s %O" common LogFormat "%{Referer}i -> %U" referer LogFormat "%{User-agent}i" agent # # Define an access log for VirtualHosts that don't define their own logfile CustomLog /var/log/apache2/other_vhosts_access.log vhost_combined # Include of directories ignores editors' and dpkg's backup files, # see README.Debian for details. # Include generic snippets of statements Include /etc/apache2/conf.d/ # Include the virtual host configurations: Include /etc/apache2/sites-enabled/ what else do I need to do to fix it? Should I be telling apache to listen on 127.0.0.1:80, or isn't it already listening there?


  • Mandatory profile on Terminal server fails to load. Userenv.log debug

    - by Datapimp23
    Hi, we're having a lot of corrupted profiles lately on our profile share. At the moment I have no clue why, but I decided to switch to one mandatory profile, since the users can all use the same one and there is no need for separate profiles per user. Here's what I did. I logged into the Terminal Server with a new user and configured some stuff (imported certificates and a few files). Then I logged out. Later, as admin, I copied the profile to another server and renamed it to bsilo. I made sure the user hive settings were adjusted and everyone had access to the hive. I shared the bsilo folder with full access for everyone, and set the NTFS permissions to read, read & execute, and list folder contents for domain users. I also renamed NTUSER.DAT to NTUSER.MAN. Then I set an environment variable %manprofile% on the Terminal Server that points to \server\bsilo\ntuser.man and set that env var as the Terminal Services profile path for a test user. When I log in as that user, I get the following output: "The system cannot find the path specified." Can somebody point me in the right direction? Thanks.

      USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised Machine Mutex/Events
      USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised User Mutex/Events
      USERENV(1774.d18) 15:52:39:724 LibMain: Process Name: \??\C:\WINDOWS\system32\winlogon.exe
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Yes, we can impersonate the user. Running as self
      USERENV(1774.d18) 15:52:48:005 =========================================================
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Entering, hToken = <0x340, lpProfileInfo = 0x6e5d8
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-dwFlags = <0x0
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpUserName =
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: NULL server name
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: no thread token found, impersonating self.
      USERENV(1774.d18) 15:52:48:005 GetInterface: Returning rpc binding handle
      USERENV(218.2f94) 15:52:48:005 IProfileSecurityCallBack: client authenticated.
      USERENV(218.2f94) 15:52:48:005 DropClientContext: Got client token 000009B4, sid = S-1-5-18
      USERENV(218.2f94) 15:52:48:005 MIDL_user_allocate enter
      USERENV(218.2f94) 15:52:48:005 DropClientContext: load profile object successfully made
      USERENV(218.2f94) 15:52:48:005 DropClientContext: Returning 0
      USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Calling DropClientToken (as self) succeeded
      USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Cookie generated
      USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Endpoint generated
      USERENV(218.1f38) 15:52:48:005 IProfileSecurityCallBack: client authenticated.
      USERENV(218.1f38) 15:52:48:020 LoadUserProfileI: RPC end point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63
      USERENV(218.1f38) 15:52:48:020 In LoadUserProfileP
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Running as client, sid = S-1-5-18
      USERENV(218.1f38) 15:52:48:020 =========================================================
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Entering, hToken = <0x98c, lpProfileInfo = 0x9c940
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-dwFlags = <0x0
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpUserName =
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: NULL server name
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: User sid: S-1-5-21-807756564-1922302612-1565739477-22627
      USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock
      USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: No existing entry found
      USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: New entry created
      USERENV(218.1f38) 15:52:48:020 CHashTable::HashAdd: S-1-5-21-807756564-1922302612-1565739477-22627 added in bucket 11
      USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Wait succeeded. In critical section.
      USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2
      USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available
      USERENV(218.1f38) 15:52:48:864 TestIfUserProfileLoaded: return with error 2.
      USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2
      USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available
      USERENV(218.1f38) 15:52:48:864 LoadUserProfile: Expanded profile path is \server\bsilo\ntuser.man
      USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Entering, lpProfilePath = <\server\bsilo\ntuser.man
      USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: checking x-forest logon, user handle = 2444
      USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: policy set to disable XForest check
      USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Mandatory profile (.man extension)
      USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Try to bypass CSC
      USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: tried NPAddConnection3ForCSCAgent. Error 2109
      USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Share \server\bsilo mapped to drive E. Returned Path E:\ntuser.man
      USERENV(218.1f38) 15:52:49:239 ParseProfilePath: CSC bypassed. Profile path E:\ntuser.man
      USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Tick Count = 0
      USERENV(218.1f38) 15:52:49:255 ParseProfilePath: GetFileAttributes found something with attributes <0x2022
      USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Found a file
      USERENV(218.1f38) 15:52:49:255 ReportError: Impersonating user.
      USERENV(218.1f38) 15:52:49:255 ReportError: Logging Error DETAIL - The system cannot find the path specified.
      USERENV(218.1f38) 15:52:49:255 GetInterface: Returning rpc binding handle
      USERENV(218.1f38) 15:52:49:255 ReportError: RPC End point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63
      USERENV(218.1f38) 15:52:49:255 ReportError: waiting on rpc async event
      USERENV(1774.2398) 15:52:49:255 ErrorDialogEx: Calling DialogBoxParam
      USERENV(1774.2398) 15:52:49:270 ErrorDlgProc:: DialogBoxParam
      USERENV(218.1f38) 15:52:52:177 RpcAsyncCompleteCall finished, status = 0
      USERENV(218.1f38) 15:52:52:177 ReleaseInterface: Releasing rpc binding handle
      USERENV(218.1f38) 15:52:52:177 LoadUserProfile: ParseProfilePath returned FALSE
      USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Cancelling connection of E:
      USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Connection deleted.
      USERENV(218.1f38) 15:52:52:177 CSyncManager::LeaveLock
      USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock released
      USERENV(218.1f38) 15:52:52:192 CHashTable::HashDelete: S-1-5-21-807756564-1922302612-1565739477-22627 deleted
      USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock deleted
      USERENV(218.1f38) 15:52:52:192 LoadUserProfile: 003 About Reverted back to user <00000000
      USERENV(218.1f38) 15:52:52:192 LoadUserProfile: Leaving with a value of 0.
      USERENV(218.1f38) 15:52:52:192 =========================================================
      USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: LoadUserProfileP failed with 3
      USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: returning 3
      USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Running as self
      USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Calling LoadUserProfileI failed. err = 3
      USERENV(218.200c) 15:52:52:192 IProfileSecurityCallBack: client authenticated.
      USERENV(218.200c) 15:52:52:192 ReleaseClientContext: Releasing context
      USERENV(218.200c) 15:52:52:192 ReleaseClientContext_s: Releasing context
      USERENV(218.200c) 15:52:52:192 MIDL_user_free enter
      USERENV(1774.d18) 15:52:52:192 ReleaseInterface: Releasing rpc binding handle
      USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Returning FALSE. Error = 3
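    A hedged reading of the trace: ParseProfilePath maps the share to a drive letter and then finds that E:\ntuser.man is a file ("Found a file") right before reporting "The system cannot find the path specified", and profile paths generally name the profile folder rather than the hive file inside it. If that is the cause, the change would look like this sketch (server and share names taken from the question):

      rem current (fails): the profile path points at the hive file
      set manprofile=\\server\bsilo\ntuser.man
      rem sketch of the fix: point the Terminal Services profile path at the
      rem folder; the folder itself contains NTUSER.MAN
      set manprofile=\\server\bsilo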


  • Command does not execute in crontab while command itself works just fine

    - by fuzzybee
    I have this script from Colin Johnson on Github - https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-automate-backup It seems great. I have modified it to send email to myself every time an EBS snapshot is created or deleted. The following works like a charm:

      ec2-automate-backup.sh -v "vol-myvolumeid" -k 3

    However, it does not execute at all as part of my crontab (I didn't receive any emails):

      #some command that got commented out
      */5 * * * * ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;
      * * * * * date >> /root/logs/crontab.log;
      */5 * * * * date >> /root/logs/crontab2.log

    Please note that the 2nd and 3rd entries execute just fine, as I can see the dates and times in the log files. What could I have missed here? The full ec2-automate-backup.sh is as follows:

      #!/bin/bash -
      # Author: Colin Johnson / [email protected]
      # Date: 2012-09-24
      # Version 0.1
      # License Type: GNU GENERAL PUBLIC LICENSE, Version 3
      #
      #confirms that executables required for successful script execution are available
      prerequisite_check() {
        for prerequisite in basename ec2-create-snapshot ec2-create-tags ec2-describe-snapshots ec2-delete-snapshot date
        do
          #use of "hash" chosen as it is a shell builtin and will add programs to hash table, possibly speeding execution. Use of type also considered - open to suggestions.
          hash $prerequisite &> /dev/null
          if [[ $? == 1 ]] #hash exits with exit status of 70, executable was not found
          then
            echo "In order to use `basename $0`, the executable \"$prerequisite\" must be installed." 1>&2 | mailx -s "Error happened 0" [email protected] ; exit 70
          fi
        done
      }

      #get_EBS_List gets a list of available EBS instances depending upon the selection_method of EBS selection that is provided by user input
      get_EBS_List() {
        case $selection_method in
          volumeid)
            if [[ -z $volumeid ]]
            then
              echo "The selection method \"volumeid\" (which is $app_name's default selection_method of operation or requested by using the -s volumeid parameter) requires a volumeid (-v volumeid) for operation. Correct usage is as follows: \"-v vol-6d6a0527\",\"-s volumeid -v vol-6d6a0527\" or \"-v \"vol-6d6a0527 vol-636a0112\"\" if multiple volumes are to be selected." 1>&2 | mailx -s "Error happened 1" [email protected] ; exit 64
            fi
            ebs_selection_string="$volumeid"
            ;;
          tag)
            if [[ -z $tag ]]
            then
              echo "The selected selection_method \"tag\" (-s tag) requires a valid tag (-t key=value) for operation. Correct usage is as follows: \"-s tag -t backup=true\" or \"-s tag -t Name=my_tag.\"" 1>&2 | mailx -s "Error happened 2" [email protected] ; exit 64
            fi
            ebs_selection_string="--filter tag:$tag"
            ;;
          *)
            echo "If you specify a selection_method (-s selection_method) for selecting EBS volumes you must select either \"volumeid\" (-s volumeid) or \"tag\" (-s tag)." 1>&2 | mailx -s "Error happened 3" [email protected] ; exit 64
            ;;
        esac
        #creates a list of all ebs volumes that match the selection string from above
        ebs_backup_list_complete=`ec2-describe-volumes --show-empty-fields --region $region $ebs_selection_string 2>&1`
        #takes the output of the previous command
        ebs_backup_list_result=`echo $?`
        if [[ $ebs_backup_list_result -gt 0 ]]
        then
          echo -e "An error occured when running ec2-describe-volumes. The error returned is below:\n$ebs_backup_list_complete" 1>&2 | mailx -s "Error happened 4" [email protected] ; exit 70
        fi
        ebs_backup_list=`echo "$ebs_backup_list_complete" | grep ^VOLUME | cut -f 2` #code to right will output list of EBS volumes to be backed up: echo -e "Now outputting ebs_backup_list:\n$ebs_backup_list"
      }

      create_EBS_Snapshot_Tags() {
        #snapshot tags holds all tags that need to be applied to a given snapshot - by aggregating tags we ensure that ec2-create-tags is called only once
        snapshot_tags=""
        #if $name_tag_create is true then append ec2ab_${ebs_selected}_$date_current to the variable $snapshot_tags
        if $name_tag_create
        then
          ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
          snapshot_tags="$snapshot_tags --tag Name=ec2ab_${ebs_selected}_$date_current"
        fi
        #if $purge_after_days is true, then append $purge_after_date to the variable $snapshot_tags
        if [[ -n $purge_after_days ]]
        then
          snapshot_tags="$snapshot_tags --tag PurgeAfter=$purge_after_date --tag PurgeAllow=true"
        fi
        #if $snapshot_tags is not zero length then set the tag on the snapshot using ec2-create-tags
        if [[ -n $snapshot_tags ]]
        then
          echo "Tagging Snapshot $ec2_snapshot_resource_id with the following Tags:"
          ec2-create-tags $ec2_snapshot_resource_id --region $region $snapshot_tags
          #echo "Snapshot tags successfully created" | mailx -s "Snapshot tags successfully created" [email protected]
        fi
      }

      date_command_get() {
        #finds full path to date binary
        date_binary_full_path=`which date`
        #command below is used to determine if date binary is gnu, macosx or other
        date_binary_file_result=`file -b $date_binary_full_path`
        case $date_binary_file_result in
          "Mach-O 64-bit executable x86_64") date_binary="macosx" ;;
          "ELF 64-bit LSB executable, x86-64, version 1 (SYSV)"*) date_binary="gnu" ;;
          *) date_binary="unknown" ;;
        esac
        #based on the installed date binary the case statement below will determine the method to use to determine "purge_after_days" in the future
        case $date_binary in
          gnu) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
          macosx) date_command="date -v+${purge_after_days}d -u +%Y-%m-%d" ;;
          unknown) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
          *) date_command="date -d +${purge_after_days}days -u +%Y-%m-%d" ;;
        esac
      }

      purge_EBS_Snapshots() {
        #snapshot_tag_list is a string that contains all snapshots with either the key PurgeAllow or PurgeAfter set
        snapshot_tag_list=`ec2-describe-tags --show-empty-fields --region $region --filter resource-type=snapshot --filter key=PurgeAllow,PurgeAfter`
        #snapshot_purge_allowed is a list of all snapshot_ids with PurgeAllow=true
        snapshot_purge_allowed=`echo "$snapshot_tag_list" | grep .*PurgeAllow'\t'true | cut -f 3`
        for snapshot_id_evaluated in $snapshot_purge_allowed
        do
          #gets the "PurgeAfter" date which is in UTC with YYYY-MM-DD format (or %Y-%m-%d)
          purge_after_date=`echo "$snapshot_tag_list" | grep .*$snapshot_id_evaluated'\t'PurgeAfter.* | cut -f 5`
          #if purge_after_date is not set then we have a problem. Need to alert user.
          if [[ -z $purge_after_date ]]
          #Alerts user to the fact that a Snapshot was found with PurgeAllow=true but with no PurgeAfter date.
          then
            echo "A Snapshot with the Snapshot ID $snapshot_id_evaluated has the tag \"PurgeAllow=true\" but does not have a \"PurgeAfter=YYYY-MM-DD\" date. $app_name is unable to determine if $snapshot_id_evaluated should be purged." 1>&2 | mailx -s "Error happened 5" [email protected]
          else
            #convert both the date_current and purge_after_date into epoch time to allow for comparison
            date_current_epoch=`date -j -f "%Y-%m-%d" "$date_current" "+%s"`
            purge_after_date_epoch=`date -j -f "%Y-%m-%d" "$purge_after_date" "+%s"`
            #perform comparison - if $purge_after_date_epoch is a lower number than $date_current_epoch than the PurgeAfter date is earlier than the current date - and the snapshot can be safely removed
            if [[ $purge_after_date_epoch < $date_current_epoch ]]
            then
              echo "The snapshot \"$snapshot_id_evaluated\" with the Purge After date of $purge_after_date will be deleted."
              ec2-delete-snapshot --region $region $snapshot_id_evaluated
              echo "Old snapshots successfully deleted for $volumeid" | mailx -s "Old snapshots successfully deleted for $volumeid" [email protected]
            fi
          fi
        done
      }

      #calls prerequisitecheck function to ensure that all executables required for script execution are available
      prerequisite_check

      app_name=`basename $0`
      #sets defaults
      selection_method="volumeid"
      region="ap-southeast-1"
      #date_binary allows a user to set the "date" binary that is installed on their system and, therefore, the options that will be given to the date binary to perform date calculations
      date_binary=""
      #sets the "Name" tag set for a snapshot to false - using "Name" requires that ec2-create-tags be called in addition to ec2-create-snapshot
      name_tag_create=false
      #sets the Purge Snapshot feature to false - this feature will eventually allow the removal of snapshots that have a "PurgeAfter" tag that is earlier than current date
      purge_snapshots=false
      #handles options processing
      while getopts :s:r:v:t:k:pn opt
      do
        case $opt in
          s) selection_method="$OPTARG";;
          r) region="$OPTARG";;
          v) volumeid="$OPTARG";;
          t) tag="$OPTARG";;
          k) purge_after_days="$OPTARG";;
          n) name_tag_create=true;;
          p) purge_snapshots=true;;
          *) echo "Error with Options Input. Cause of failure is most likely that an unsupported parameter was passed or a parameter was passed without a corresponding option." 1>&2 ; exit 64;;
        esac
      done

      #sets date variable
      date_current=`date -u +%Y-%m-%d`
      #sets the PurgeAfter tag to the number of days that a snapshot should be retained
      if [[ -n $purge_after_days ]]
      then
        #if the date_binary is not set, call the date_command_get function
        if [[ -z $date_binary ]]
        then
          date_command_get
        fi
        purge_after_date=`$date_command`
        echo "Snapshots taken by $app_name will be eligible for purging after the following date: $purge_after_date."
      fi

      #get_EBS_List gets a list of EBS instances for which a snapshot is desired. The list of EBS instances depends upon the selection_method that is provided by user input
      get_EBS_List

      #the loop below is called once for each volume in $ebs_backup_list - the currently selected EBS volume is passed in as "ebs_selected"
      for ebs_selected in $ebs_backup_list
      do
        ec2_snapshot_description="ec2ab_${ebs_selected}_$date_current"
        ec2_create_snapshot_result=`ec2-create-snapshot --region $region -d $ec2_snapshot_description $ebs_selected 2>&1`
        if [[ $? != 0 ]]
        then
          echo -e "An error occured when running ec2-create-snapshot. The error returned is below:\n$ec2_create_snapshot_result" 1>&2 ; exit 70
        else
          ec2_snapshot_resource_id=`echo "$ec2_create_snapshot_result" | cut -f 2`
          echo "Snapshots successfully created for volume $volumeid" | mailx -s "Snapshots successfully created for $volumeid" [email protected]
        fi
        create_EBS_Snapshot_Tags
      done

      #if purge_snapshots is true, then run purge_EBS_Snapshots function
      if $purge_snapshots
      then
        echo "Snapshot Purging is Starting Now."
        purge_EBS_Snapshots
      fi

    Cron log:

      Oct 23 10:24:01 ip-10-130-153-227 CROND[28214]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
      Oct 23 10:24:01 ip-10-130-153-227 CROND[28215]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:25:01 ip-10-130-153-227 CROND[28228]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:25:01 ip-10-130-153-227 CROND[28229]: (root) CMD (date >> /root/logs/crontab2.log)
      Oct 23 10:26:01 ip-10-130-153-227 CROND[28239]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:27:01 ip-10-130-153-227 CROND[28247]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
      Oct 23 10:27:01 ip-10-130-153-227 CROND[28248]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:28:01 ip-10-130-153-227 CROND[28263]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:29:01 ip-10-130-153-227 CROND[28275]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:30:01 ip-10-130-153-227 CROND[28292]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
      Oct 23 10:30:01 ip-10-130-153-227 CROND[28293]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:30:01 ip-10-130-153-227 CROND[28294]: (root) CMD (date >> /root/logs/crontab2.log)
      Oct 23 10:31:01 ip-10-130-153-227 CROND[28312]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:32:01 ip-10-130-153-227 CROND[28319]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:33:01 ip-10-130-153-227 CROND[28325]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:33:01 ip-10-130-153-227 CROND[28324]: (root) CMD (root (ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3;))
      Oct 23 10:34:01 ip-10-130-153-227 CROND[28345]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:35:01 ip-10-130-153-227 CROND[28362]: (root) CMD (date >> /root/logs/crontab.log;)
      Oct 23 10:35:01 ip-10-130-153-227 CROND[28363]: (root) CMD (date >> /root/logs/crontab2.log)

    Mails to root:

      From [email protected] Tue Oct 23 06:00:01 2012
      Return-Path: <[email protected]>
      Date: Tue, 23 Oct 2012 06:00:01 GMT
      From: [email protected] (Cron Daemon)
      To: [email protected]
      Subject: Cron <root@ip-10-130-153-227> root ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3
      Content-Type: text/plain; charset=UTF-8
      Auto-Submitted: auto-generated
      X-Cron-Env: <SHELL=/bin/sh>
      X-Cron-Env: <HOME=/root>
      X-Cron-Env: <PATH=/usr/bin:/bin>
      X-Cron-Env: <LOGNAME=root>
      X-Cron-Env: <USER=root>
      Status: R

      /bin/sh: root: command not found
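    For what it's worth, the mail body (/bin/sh: root: command not found) and the cron log entries of the form CMD (root (ec2-automate-backup.sh ...)) suggest the failing line carries a user field, as used in /etc/crontab or /etc/cron.d, inside a per-user crontab, so cron tries to execute a command named "root". A hedged sketch of a per-user crontab entry (the install path and extra PATH entry are assumptions):

      # crontab -e for root: no user column, an absolute path to the script, and
      # a PATH that includes the EC2 API tools (cron's default is /usr/bin:/bin)
      PATH=/usr/bin:/bin:/opt/aws/bin
      */5 * * * * /opt/aws/bin/ec2-automate-backup.sh -v "vol-fb2fbcdf" -k 3 >> /root/logs/ec2ab.log 2>&1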


  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on them. The app needs to be real-time responsive (ideally < 50 ms to process each request). I have been doing some timing tests on my code and I found something very interesting (see below).

      clearLog();
      log("Log cleared");
      camera.QueryFrame();
      camera.QueryFrame();
      log("Camera buffer cleared");
      Sensor s = t.val;
      log("Sx: " + S.X + " Sy: " + S.Y);
      Image<Bgr, Byte> cameraImage = camera.QueryFrame();
      log("Camera output acquired for processing");

    Each time log is called, the time since the beginning of the processing is displayed. Here is my log output:

      [3 ms]Log cleared
      [41 ms]Camera buffer cleared
      [41 ms]Sx: 589 Sy: 414
      [112 ms]Camera output acquired for processing

    The timings are computed using a Stopwatch from System.Diagnostics.

    QUESTION 1 I find this slightly interesting, since when the same method is called twice it executes in ~40 ms, and when it is called once the next time it takes longer (~70 ms). Assigning the value can't really be taking that long, right?

    QUESTION 2 Also, the timing for each step recorded above varies from run to run. The values for some steps are sometimes as low as 0 ms and sometimes as high as 100 ms, though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the meantime? (If this is for some other reason, please let me know.) Is there some way to ensure that when this function runs, it gets the highest priority, so that the speed test results will be consistently low (in terms of time)?

    EDIT I changed the code to remove the two blank query frames from above, so the code is now:

      clearLog();
      log("Log cleared");
      Sensor s = t.val;
      log("Sx: " + S.X + " Sy: " + S.Y);
      Image<Bgr, Byte> cameraImage = camera.QueryFrame();
      log("Camera output acquired for processing");

    The timing results are now:

      [2 ms]Log cleared
      [3 ms]Sx: 589 Sy: 414
      [5 ms]Camera output acquired for processing

    The next steps now take longer (sometimes a step jumps to 20-30 ms later, where it was previously almost instantaneous). I am guessing this is due to CPU scheduling. Is there some way I can ensure the CPU does not get scheduled to do something else while it is running through this code?
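    You can't stop the OS from scheduling other work entirely, but raising priority usually tightens the variance. A hedged sketch (this boosts the whole process and the current thread; high priorities can starve other processes, so treat it as a measurement aid rather than a production setting):

      using System.Diagnostics;
      using System.Threading;

      // Raise scheduling priority around the timed section.
      Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
      Thread.CurrentThread.Priority = ThreadPriority.Highest;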


  • Working with Hibernate/DAO problems

    - by Gandalf StormCrow
    Hello everyone, here is my DAO class:

      public class UsersDAO extends HibernateDaoSupport {
          private static final Log log = LogFactory.getLog(UsersDAO.class);

          protected void initDao() {
              //do nothing
          }

          public void save(User transientInstance) {
              log.debug("saving Users instance");
              try {
                  getHibernateTemplate().saveOrUpdate(transientInstance);
                  log.debug("save successful");
              } catch (RuntimeException re) {
                  log.error("save failed", re);
                  throw re;
              }
          }

          public void update(User transientInstance) {
              log.debug("updating User instance");
              try {
                  getHibernateTemplate().update(transientInstance);
                  log.debug("update successful");
              } catch (RuntimeException re) {
                  log.error("update failed", re);
                  throw re;
              }
          }

          public void delete(User persistentInstance) {
              log.debug("deleting Users instance");
              try {
                  getHibernateTemplate().delete(persistentInstance);
                  log.debug("delete successful");
              } catch (RuntimeException re) {
                  log.error("delete failed", re);
                  throw re;
              }
          }

          public User findById(java.lang.Integer id) {
              log.debug("getting Users instance with id: " + id);
              try {
                  User instance = (User) getHibernateTemplate().get("project.hibernate.Users", id);
                  return instance;
              } catch (RuntimeException re) {
                  log.error("get failed", re);
                  throw re;
              }
          }
      }

    Now, I wrote a test class (not a JUnit test) to check that everything works. My user has these fields in the database: userID, which is a 5-character string and the unique/primary key, and fields such as address, dob, etc. (15 columns total in the table). In my test class I instantiated a User and assigned values like:

      User user = new User();
      user.setAddress("some address");

    and so on for all 15 fields. After assigning data to the User object I called the DAO to save it to the database:

      UsersDao.save(user);

    and save works perfectly. My question is how do I update/delete users using the same logic? For example, I tried this (to delete a user from the users table):

      User user = new User();
      user.setUserID("1s54f"); // the unique key for users, no two are the same
      UsersDao.delete(user);

    I wanted to delete the user with this key, but it obviously doesn't work that way. Can someone please explain how to do these? Thank you.
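    A hedged sketch of the usual pattern: Hibernate's delete() and update() operate on a loaded instance, so you first fetch the row by its key and then pass that managed object to the DAO (the HQL entity name follows the mapping in the question; the literal key is illustrative):

      // Load the user by the natural key, then delete the managed instance.
      List results = getHibernateTemplate().find(
              "from project.hibernate.Users u where u.userID = ?", "1s54f");
      if (!results.isEmpty()) {
          User toDelete = (User) results.get(0);
          getHibernateTemplate().delete(toDelete);
      }

    Updates follow the same shape: load the instance, change it with its setters, then call update() (or saveOrUpdate()) on the loaded object.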


  • Meteor MongoDB _id changing after insert (and UUID property as well)

    - by lommaj
    I have a Meteor method that does an insert. I'm using Regulate.js for form validation. I set the game_id field to Meteor.uuid() to create a unique value, which I also route to /game_show/:game_id using Iron Router. As you can see, I'm logging the details of the game, and this works fine (log output linked below).

      Meteor.methods({
        create_game_form: function (data) {
          Regulate.create_game_form.validate(data, function (error, data) {
            if (error) {
              console.log('Server side validation failed.');
            } else {
              console.log('Server side validation passed!');
              // Save data to database or whatever...
              //console.log(data[0].value);
              var new_game = {
                game_id: Meteor.uuid(),
                name: data[0].value,
                game_type: data[1].value,
                creator_user_id: Meteor.userId(),
                user_name: Meteor.user().profile.name,
                created: new Date()
              };
              console.log("NEW GAME BEFORE INSERT: ", new_game);
              GamesData.insert(new_game, function (error, new_id) {
                console.log("GAMES NEW MONGO ID: ", new_id);
                var game_data = GamesData.findOne({_id: new_id});
                console.log('NEW GAME AFTER INSERT: ', game_data);
                Session.set('CURRENT_GAME', game_data);
              });
            }
          });
        }
      });

    All of the data coming out of the console.log at this point looks fine. After this method call, the client routes to /game_show/:game_id:

      Meteor.call('create_game_form', data, function (error) {
        if (error) {
          return alert(error.reason);
        }
        //console.log("post insert data for routing variable ", data);
        var created_game = Session.get('CURRENT_GAME');
        console.log("Session Game ", created_game);
        Router.go('game_show', {game_id: created_game.game_id});
      });

    On this view, I try to load the document with the game_id I just inserted:

      Template.game_start.helpers({
        game_info: function () {
          console.log(this.game_id);
          var game_data = GamesData.find({game_id: this.game_id});
          console.log("trying to load via UUID ", game_data);
          return game_data;
        }
      });

    Sorry, I can't upload images: https://www.evernote.com/shard/s21/sh/c07e8047-de93-4d08-9dc7-dae51668bdec/a8baf89a09e55f8902549e79f136fd45

    As you can see from the console log in that screenshot, everything matches: the id logged before insert, the id logged in the insert callback using findOne(), and the id passed in the URL. However, the Mongo _id and the UUID I inserted ARE NOT THERE; the only document in there has all the other fields matching except those two! Not sure what I'm doing wrong. Thanks!
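    One hedged thing to check: with the insecure/autopublish packages removed, the client only sees documents that are published to it, so if the game_show route renders before a subscription for that game_id is ready, the client-side find() comes up empty or shows a stale local copy. A sketch of publishing and subscribing by game_id (the publication name and wiring are assumptions):

      // server: publish a single game by its game_id property
      Meteor.publish('game_by_id', function (gameId) {
        return GamesData.find({game_id: gameId});
      });

      // client: e.g. in the route's waitOn or the template's created callback
      Meteor.subscribe('game_by_id', gameIdFromRoute);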


  • MySQL server has gone away

    - by user491992
    Hello friends, I executed this query on my MySQL server and it gives me a "MySQL server has gone away" error. In the following query, both tables have more than 1,000,000 rows.

      SELECT a_tab_11_10.url AS url, a_tab_11_10.c5 AS 't1', a_tab_12_10.c3 AS 't2'
      FROM a_tab_11_10
      JOIN a_tab_12_10 ON (a_tab_11_10.url) = (a_tab_12_10.url)
      ORDER BY (a_tab_11_10.c5 - a_tab_12_10.c3) DESC
      LIMIT 10

    Here is my log file, but I can't see anything wrong in it. (Thank you @Faisal for the answer; I checked my log file, but I am still not finding the problem.)

      110111 10:19:50 [Note] Plugin 'FEDERATED' is disabled.
      110111 10:19:51 InnoDB: Started; log sequence number 0 945537221
      110111 10:19:51 [Note] Event Scheduler: Loaded 0 events
      110111 10:19:51 [Note] wampmysqld: ready for connections. Version: '5.1.36-community-log' socket: '' port: 3306 MySQL Community Server (GPL)
      110111 12:35:42 [Note] wampmysqld: Normal shutdown
      110111 12:35:43 [Note] Event Scheduler: Purging the queue. 0 events
      110111 12:35:43 InnoDB: Starting shutdown...
      110111 12:35:45 InnoDB: Shutdown completed; log sequence number 0 945538624
      110111 12:35:45 [Warning] Forcing shutdown of 1 plugins
      110111 12:35:45 [Note] wampmysqld: Shutdown complete
      110111 12:36:39 [Note] Plugin 'FEDERATED' is disabled.
      110111 12:36:40 InnoDB: Started; log sequence number 0 945538624
      110111 12:36:40 [Note] Event Scheduler: Loaded 0 events
      110111 12:36:40 [Note] wampmysqld: ready for connections. Version: '5.1.36-community-log' socket: '' port: 3306 MySQL Community Server (GPL)
      110111 12:36:40 [Note] wampmysqld: Normal shutdown
      110111 12:36:40 [Note] Event Scheduler: Purging the queue. 0 events
      110111 12:36:40 InnoDB: Starting shutdown...
      110111 12:36:42 InnoDB: Shutdown completed; log sequence number 0 945538634
      110111 12:36:42 [Warning] Forcing shutdown of 1 plugins
      110111 12:36:42 [Note] wampmysqld: Shutdown complete
      110111 12:36:52 [Note] Plugin 'FEDERATED' is disabled.
      110111 12:36:52 InnoDB: Started; log sequence number 0 945538634
      110111 12:36:52 [Note] Event Scheduler: Loaded 0 events
      110111 12:36:52 [Note] wampmysqld: ready for connections. Version: '5.1.36-community-log' socket: '' port: 3306 MySQL Community Server (GPL)
      110111 12:37:42 [Note] wampmysqld: Normal shutdown
      110111 12:37:42 [Note] Event Scheduler: Purging the queue. 0 events
      110111 12:37:42 InnoDB: Starting shutdown...
      110111 12:37:43 InnoDB: Shutdown completed; log sequence number 0 945538634
      110111 12:37:43 [Warning] Forcing shutdown of 1 plugins
      110111 12:37:43 [Note] wampmysqld: Shutdown complete
      110111 12:37:46 [Note] Plugin 'FEDERATED' is disabled.
      110111 12:37:46 InnoDB: Started; log sequence number 0 945538634
      110111 12:37:46 [Note] Event Scheduler: Loaded 0 events
      110111 12:37:46 [Note] wampmysqld: ready for connections. Version: '5.1.36-community-log' socket: '' port: 3306 MySQL Community Server (GPL)
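    "MySQL server has gone away" on a long-running query usually means the connection was dropped before the result came back, either by a timeout or by the packet-size limit. A hedged starting point for a WAMP-style my.ini (the values are illustrative, set under the [mysqld] section):

      [mysqld]
      max_allowed_packet = 64M
      wait_timeout       = 600

    After a restart, SHOW VARIABLES LIKE 'max_allowed_packet'; and SHOW VARIABLES LIKE 'wait_timeout'; confirm what the server is actually using.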


  • Cisco 800 series won't forward port

    - by sam
Hello ServerFault, I am trying to forward port 444 from my Cisco router to my web server (192.168.0.2). As far as I can tell, my port forwarding is configured correctly, yet no traffic will pass through on port 444. Here is my config:

    !
    version 12.3
    service config
    no service pad
    service tcp-keepalives-in
    service tcp-keepalives-out
    service timestamps debug uptime
    service timestamps log uptime
    service password-encryption
    no service dhcp
    !
    hostname QUESTMOUNT
    !
    logging buffered 16386 informational
    logging rate-limit 100 except warnings
    no logging console
    no logging monitor
    enable secret 5 -removed-
    !
    username administrator secret 5 -removed-
    username manager secret 5 -removed-
    clock timezone NZST 12
    clock summer-time NZDT recurring 1 Sun Oct 2:00 3 Sun Mar 3:00
    aaa new-model
    !
    !
    aaa authentication login default local
    aaa authentication login userlist local
    aaa authentication ppp default local
    aaa authorization network grouplist local
    aaa session-id common
    ip subnet-zero
    no ip source-route
    no ip domain lookup
    ip domain name quest.local
    !
    !
    no ip bootp server
    ip inspect name firewall tcp
    ip inspect name firewall udp
    ip inspect name firewall cuseeme
    ip inspect name firewall h323
    ip inspect name firewall rcmd
    ip inspect name firewall realaudio
    ip inspect name firewall streamworks
    ip inspect name firewall vdolive
    ip inspect name firewall sqlnet
    ip inspect name firewall tftp
    ip inspect name firewall ftp
    ip inspect name firewall icmp
    ip inspect name firewall sip
    ip inspect name firewall fragment maximum 256 timeout 1
    ip inspect name firewall netshow
    ip inspect name firewall rtsp
    ip inspect name firewall skinny
    ip inspect name firewall http
    ip audit notify log
    ip audit po max-events 100
    ip audit name intrusion info list 3 action alarm
    ip audit name intrusion attack list 3 action alarm drop reset
    no ftp-server write-enable
    !
    !
    !
    !
    crypto isakmp policy 1
     authentication pre-share
    !
    crypto isakmp policy 2
     encr 3des
     authentication pre-share
     group 2
    !
    crypto isakmp client configuration group staff
     key 0 qS;,sc:q<skro1^,
     domain quest.local
     pool vpnclients
     acl 106
    !
    !
    crypto ipsec transform-set tr-null-sha esp-null esp-sha-hmac
    crypto ipsec transform-set tr-des-md5 esp-des esp-md5-hmac
    crypto ipsec transform-set tr-des-sha esp-des esp-sha-hmac
    crypto ipsec transform-set tr-3des-sha esp-3des esp-sha-hmac
    !
    crypto dynamic-map vpnusers 1
     description Client to Site VPN Users
     set transform-set tr-des-md5
    !
    !
    crypto map cm-cryptomap client authentication list userlist
    crypto map cm-cryptomap isakmp authorization list grouplist
    crypto map cm-cryptomap client configuration address respond
    crypto map cm-cryptomap 65000 ipsec-isakmp dynamic vpnusers
    !
    !
    !
    !
    interface Ethernet0
     ip address 192.168.0.254 255.255.255.0
     ip access-group 102 in
     ip nat inside
     hold-queue 100 out
    !
    interface ATM0
     no ip address
     no atm ilmi-keepalive
     dsl operating-mode auto
    !
    interface ATM0.1 point-to-point
     pvc 0/100
      encapsulation aal5mux ppp dialer
      dialer pool-member 1
    !
    !
    interface Dialer0
     bandwidth 640
     ip address negotiated
     ip access-group 101 in
     no ip redirects
     no ip unreachables
     ip nat outside
     ip inspect firewall out
     ip audit intrusion in
     encapsulation ppp
     no ip route-cache
     no ip mroute-cache
     dialer pool 1
     dialer-group 1
     no cdp enable
     ppp pap sent-username -removed- password 7 -removed-
     ppp ipcp dns request
     crypto map cm-cryptomap
    !
    ip local pool vpnclients 192.168.99.1 192.168.99.254
    ip nat inside source list 105 interface Dialer0 overload
    ip nat inside source static tcp 192.168.0.2 444 interface Dialer0 444
    ip nat inside source static tcp 192.168.0.51 9000 interface Dialer0 9000
    ip nat inside source static udp 192.168.0.2 1433 interface Dialer0 1433
    ip nat inside source static tcp 192.168.0.2 1433 interface Dialer0 1433
    ip nat inside source static tcp 192.168.0.2 25 interface Dialer0 25
    ip classless
    ip route 0.0.0.0 0.0.0.0 Dialer0
    ip http server
    no ip http secure-server
    !
    ip access-list logging interval 10
    logging 192.168.0.2
    access-list 1 remark The local LAN.
    access-list 1 permit 192.168.0.0 0.0.0.255
    access-list 2 permit 192.168.0.0
    access-list 2 remark Where management can be done from.
    access-list 2 permit 192.168.0.0 0.0.0.255
    access-list 3 remark Traffic not to check for intrustion detection.
    access-list 3 deny 192.168.99.0 0.0.0.255
    access-list 3 permit any
    access-list 101 remark Traffic allowed to enter the router from the Internet
    access-list 101 permit ip 192.168.99.0 0.0.0.255 192.168.0.0 0.0.0.255
    access-list 101 deny ip 0.0.0.0 0.255.255.255 any
    access-list 101 deny ip 10.0.0.0 0.255.255.255 any
    access-list 101 deny ip 127.0.0.0 0.255.255.255 any
    access-list 101 deny ip 169.254.0.0 0.0.255.255 any
    access-list 101 deny ip 172.16.0.0 0.15.255.255 any
    access-list 101 deny ip 192.0.2.0 0.0.0.255 any
    access-list 101 deny ip 192.168.0.0 0.0.255.255 any
    access-list 101 deny ip 198.18.0.0 0.1.255.255 any
    access-list 101 deny ip 224.0.0.0 0.15.255.255 any
    access-list 101 deny ip any host 255.255.255.255
    access-list 101 permit tcp 67.228.209.128 0.0.0.15 any eq 1433
    access-list 101 permit tcp host 120.136.2.22 any eq 1433
    access-list 101 permit tcp host 123.100.90.58 any eq 1433
    access-list 101 permit udp 67.228.209.128 0.0.0.15 any eq 1433
    access-list 101 permit udp host 120.136.2.22 any eq 1433
    access-list 101 permit udp host 123.100.90.58 any eq 1433
    access-list 101 permit tcp any any eq 444
    access-list 101 permit tcp any any eq 9000
    access-list 101 permit tcp any any eq smtp
    access-list 101 permit udp any any eq non500-isakmp
    access-list 101 permit udp any any eq isakmp
    access-list 101 permit esp any any
    access-list 101 permit tcp any any eq 1723
    access-list 101 permit gre any any
    access-list 101 permit tcp any any eq 22
    access-list 101 permit tcp any any eq telnet
    access-list 102 remark Traffic allowed to enter the router from the Ethernet
    access-list 102 permit ip any host 192.168.0.254
    access-list 102 deny ip any host 192.168.0.255
    access-list 102 deny udp any any eq tftp log
    access-list 102 permit ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
    access-list 102 deny ip any 0.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 10.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 127.0.0.0 0.255.255.255 log
    access-list 102 deny ip any 169.254.0.0 0.0.255.255 log
    access-list 102 deny ip any 172.16.0.0 0.15.255.255 log
    access-list 102 deny ip any 192.0.2.0 0.0.0.255 log
    access-list 102 deny ip any 192.168.0.0 0.0.255.255 log
    access-list 102 deny ip any 198.18.0.0 0.1.255.255 log
    access-list 102 deny udp any any eq 135 log
    access-list 102 deny tcp any any eq 135 log
    access-list 102 deny udp any any eq netbios-ns log
    access-list 102 deny udp any any eq netbios-dgm log
    access-list 102 deny tcp any any eq 445 log
    access-list 102 permit ip 192.168.0.0 0.0.0.255 any
    access-list 102 permit ip any host 255.255.255.255
    access-list 102 deny ip any any log
    access-list 105 remark Traffic to NAT
    access-list 105 deny ip 192.168.0.0 0.0.0.255 192.168.99.0 0.0.0.255
    access-list 105 permit ip 192.168.0.0 0.0.0.255 any
    access-list 106 remark User to Site VPN Clients
    access-list 106 permit ip 192.168.0.0 0.0.0.255 any
    dialer-list 1 protocol ip permit
    !
    line con 0
     no modem enable
    line aux 0
    line vty 0 4
     access-class 2 in
     transport input telnet ssh
     transport output none
    !
    scheduler max-task-time 5000
    !
    end

Any ideas? :)
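Not part of the original post, but a hedged troubleshooting sketch: the static NAT entry and the ACL 101 line for port 444 both look correct, so the usual next step on IOS is to watch the translation table and ACL hit counters while a test connection is made from outside the network (testing from the inside LAN never hits the Dialer0 NAT rules, a common trap). The commands below are standard IOS; which one exposes the problem is an assumption:

    show ip nat translations | include 444
    show access-list 101
    show ip inspect sessions
    debug ip nat detailed

If no translation appears, the packet is being dropped before NAT; if it appears but nothing comes back, check that the web server really listens on 444 and that its default gateway is 192.168.0.254.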

    Read the article

  • Is post-sudden-power-loss filesystem corruption on an SSD drive's ext3 partition "expected behavior"?

    - by Jeremy Friesner
My company makes an embedded Debian Linux device that boots from an ext3 partition on an internal SSD drive. Because the device is an embedded "black box", it is usually shut down the rude way, by simply cutting power to the device via an external switch. This is normally okay, as ext3's journalling keeps things in order, so other than the occasional loss of part of a log file, things keep chugging along fine. However, we've recently seen a number of units where after a number of hard power-cycles the ext3 partition starts to develop structural issues -- in particular, we run e2fsck on the ext3 partition and it finds a number of issues like those shown in the output listing at the bottom of this question. Running e2fsck until it stops reporting errors (or reformatting the partition) clears the issues.

My question is... what are the implications of seeing problems like this on an ext3/SSD system that has been subjected to lots of sudden/unexpected shutdowns? My feeling is that this might be a sign of a software or hardware problem in our system, since my understanding is that (barring a bug or hardware problem) ext3's journalling feature is supposed to prevent these sorts of filesystem-integrity errors. (Note: I understand that user-data is not journalled and so munged/missing/truncated user files can happen; I'm specifically talking here about filesystem-metadata errors like those shown below.)

My co-worker, on the other hand, says that this is known/expected behavior because SSD controllers sometimes re-order write commands and that can cause the ext3 journal to get confused. In particular, he believes that even given normally functioning hardware and bug-free software, the ext3 journal only makes filesystem corruption less likely, not impossible, so we should not be surprised to see problems like this from time to time. Which of us is right?

    Embedded-PC-failsafe:~# ls
    Embedded-PC-failsafe:~# umount /mnt/unionfs
    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Invalid inode number for '.' in directory inode 46948. Fix<y>? yes
    Directory inode 46948, block 0, offset 12: directory corrupted Salvage<y>? yes
    Entry 'status_2012-11-26_14h13m41.csv' in /var/log/status_logs (46956) has deleted/unused inode 47075. Clear<y>? yes
    Entry 'status_2012-11-26_10h42m58.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47076. Clear<y>? yes
    Entry 'status_2012-11-26_11h29m41.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47080. Clear<y>? yes
    Entry 'status_2012-11-26_11h42m13.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47081. Clear<y>? yes
    Entry 'status_2012-11-26_12h07m17.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47083. Clear<y>? yes
    Entry 'status_2012-11-26_12h14m53.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47085. Clear<y>? yes
    Entry 'status_2012-11-26_15h06m49.csv' in /var/log/status_logs (46956) has deleted/unused inode 47088. Clear<y>? yes
    Entry 'status_2012-11-20_14h50m09.csv' in /var/log/status_logs (46956) has deleted/unused inode 47073. Clear<y>? yes
    Entry 'status_2012-11-20_14h55m32.csv' in /var/log/status_logs (46956) has deleted/unused inode 47074. Clear<y>? yes
    Entry 'status_2012-11-26_11h04m36.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47078. Clear<y>? yes
    Entry 'status_2012-11-26_11h54m45.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47082. Clear<y>? yes
    Entry 'status_2012-11-26_12h12m20.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47084. Clear<y>? yes
    Entry 'status_2012-11-26_12h33m52.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47086. Clear<y>? yes
    Entry 'status_2012-11-26_10h51m59.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47077. Clear<y>? yes
    Entry 'status_2012-11-26_11h17m09.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47079. Clear<y>? yes
    Entry 'status_2012-11-26_12h54m11.csv.gz' in /var/log/status_logs (46956) has deleted/unused inode 47087. Clear<y>? yes
    Pass 3: Checking directory connectivity
    '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
    Couldn't fix parent of inode 46948: Couldn't find parent directory entry
    Pass 4: Checking reference counts
    Unattached inode 46945
    Connect to /lost+found<y>? yes
    Inode 46945 ref count is 2, should be 1. Fix<y>? yes
    Inode 46953 ref count is 5, should be 4. Fix<y>? yes
    Pass 5: Checking group summary information
    Block bitmap differences: -(208264--208266) -(210062--210068) -(211343--211491) -(213241--213250) -(213344--213393) -213397 -(213457--213463) -(213516--213521) -(213628--213655) -(213683--213688) -(213709--213728) -(215265--215300) -(215346--215365) -(221541--221551) -(221696--221704) -227517
    Fix<y>? yes
    Free blocks count wrong for group #6 (17247, counted=17611). Fix<y>? yes
    Free blocks count wrong (161691, counted=162055). Fix<y>? yes
    Inode bitmap differences: +(47089--47090) +47093 +47095 +(47097--47099) +(47101--47104) -(47219--47220) -47222 -47224 -47228 -47231 -(47347--47348) -47350 -47352 -47356 -47359 -(47457--47488) -47985 -47996 -(47999--48000) -48017 -(48027--48028) -(48030--48032) -48049 -(48059--48060) -(48062--48064) -48081 -(48091--48092) -(48094--48096)
    Fix<y>? yes
    Free inodes count wrong for group #6 (7608, counted=7624). Fix<y>? yes
    Free inodes count wrong (61919, counted=61935). Fix<y>? yes
    embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
    embeddedrootwrite: ********** WARNING: Filesystem still has errors **********
    embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
    Embedded-PC-failsafe:~#
    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite contains a file system with errors, check forced.
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Directory entry for '.' in ... (46948) is big. Split<y>? yes
    Missing '..' in directory inode 46948. Fix<y>? yes
    Setting filetype for entry '..' in ... (46948) to 2.
    Pass 3: Checking directory connectivity
    '..' in /etc/network/run (46948) is <The NULL inode> (0), should be /etc/network (46953). Fix<y>? yes
    Pass 4: Checking reference counts
    Inode 2 ref count is 12, should be 13. Fix<y>? yes
    Pass 5: Checking group summary information
    embeddedrootwrite: ***** FILE SYSTEM WAS MODIFIED *****
    embeddedrootwrite: 657/62592 files (24.4% non-contiguous), 87882/249937 blocks
    Embedded-PC-failsafe:~#
    Embedded-PC-failsafe:~# e2fsck /dev/sda3
    e2fsck 1.41.3 (12-Oct-2008)
    embeddedrootwrite: clean, 657/62592 files, 87882/249937 blocks
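Not an answer from the original thread, but a hedged sketch of what is usually checked first here: ext3's ordering guarantees only hold if the drive's write cache is flushed via barriers or switched off entirely, so confirming barrier behaviour is a natural first step. The option and command names below are standard; whether this particular SSD controller honours cache-flush commands at all is the open assumption:

    # /etc/fstab -- mount the root partition with barriers explicitly enabled
    /dev/sda3  /  ext3  defaults,data=ordered,barrier=1  0  1

    # check whether the kernel disabled or rejected barriers at mount time
    dmesg | grep -i barrier

    # if the controller ignores flush commands, disable its write cache instead
    hdparm -W 0 /dev/sda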

    Read the article

  • Kubuntu login problem

    - by Balázs Mária Németh
I have a problem when trying to log in to my Kubuntu 10.10. The login screen shows up, I type my password, a blank screen with my desktop background picture appears, and then it throws me back to the login screen. If I choose to log in from the console I can do that, and by typing startx I can log in just fine, but then I cannot shut down the computer from the K menu, nor are my settings remembered for the next time I log in. I have my home directory mounted from a different partition, but I tried creating a new user account and could log in without any kind of problem. The home directory is not encrypted, at least not to my knowledge. Some log files: Xorg.0.log, dmesg.out, kdm.log, xsession.errors
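Not from the original post, but a hedged guess at the first thing to check: a login loop with a working console login very often means X cannot write its session files in $HOME -- easy to hit when /home is mounted from another partition and ownership has drifted. The commands are standard; the user name is a placeholder:

    ls -ld ~ ~/.Xauthority ~/.ICEauthority    # all should be owned by you and writable
    sudo chown -R youruser:youruser /home/youruser
    rm ~/.Xauthority ~/.ICEauthority          # both are recreated at the next login

The tail of ~/.xsession-errors usually names the exact file that could not be written.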

    Read the article

  • determine collision angle on a rotating body

    - by jorb
Update: new diagram and updated description. I have a contact listener set up to try to determine the side a collision happened on, relative to a body's rotation. One way to solve this is to find the value of the yellow angle between the red and blue vectors drawn above. The angle can be found by taking the arc cosine of the dot product of the two vectors (Evan pointed this out). One of my points of confusion is the difference in domain between the atan2 function, HTML canvas coordinates, and the Box2D rotation information. I know I have to account for this somehow... Screenshot below. Questions: Does Box2D provide these angles more directly in the collision information? Am I even on the right track? If so, any hints? I have the following JavaScript so far:

    Ship.prototype.onCollide = function (other_ent,cx,cy) {
        var pos = this.body.GetPosition();
        //collision position relative to body
        var d_cx = pos.x - cx;
        var d_cy = pos.y - cy;
        //length of initial vector
        var len = Math.sqrt(Math.pow(pos.x -cx,2) + Math.pow(pos.y-cy,2));
        //body angle - can over rotate hence mod 2*Pi
        var ang = this.body.GetAngle() % (Math.PI * 2);
        //vector representing body's angle - same magnitude as the first
        var b_vx = len * Math.cos(ang);
        var b_vy = len * Math.sin(ang);
        //dot product of the two vectors
        var dot_prod = d_cx * b_vx + d_cy * b_vy;
        //new calculation of difference in angle - NOT WORKING!
        var d_ang = Math.acos(dot_prod);
        var side;
        if (Math.abs(d_ang) < Math.PI/2 )
            side = "front";
        else
            side = "back";
        console.log("length",len);
        console.log("pos:",pos.x,pos.y);
        console.log("offs:",d_cx,d_cy);
        console.log("body vec",b_vx,b_vy);
        console.log("body angle:",ang);
        console.log("dot product",dot_prod);
        console.log("result:",d_ang);
        console.log("side",side);
        console.log("------------------------");
    }
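A hedged sketch of a likely fix (my suggestion, not part of the original post): Math.acos expects a cosine, i.e. the dot product divided by the product of the two magnitudes; passing the raw dot product only works for unit vectors. Since both vectors above were deliberately built with magnitude len, the divisor is just len * len:

    // cos(theta) = (a . b) / (|a| * |b|); both vectors have magnitude len here.
    // Clamp to [-1, 1] so floating-point noise cannot push the ratio outside
    // acos's domain, and guard against a zero-length vector.
    var cosTheta = Math.max(-1, Math.min(1, dot_prod / (len * len)));
    var d_ang = (len > 0) ? Math.acos(cosTheta) : 0;

As for the first question: as far as I know, Box2D exposes the contact normal (e.g. through b2WorldManifold in the contact listener) rather than this angle directly, so deriving the side from the body angle as above is a reasonable route.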

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example, in Clipper/(x)Harbour there are commands #command, #translate, #xcommand and #xtranslate that allow things like this:

    #xcommand REPEAT; => DO WHILE .T.
    #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO

    LOCAL n := 1
    REPEAT
        n := n + 1
    UNTIL n > 100

Similarly, in Tcl I'm doing:

    proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} {
        set fromDate [clock scan $dat1]
        set toDate [clock scan $dat2]
        if {$slice eq "day"} then {set incrementor [expr 24 * 60]}
        if {$slice eq "hour"} then {set incrementor 60}
        set method DateRange
        puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice"
        for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} {
            # ...
        }
    }

    process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day

Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.
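For comparison, here is a minimal sketch of the Clipper REPEAT/UNTIL from above built in plain Tcl (my own illustration, not from the original post; repeat is a hypothetical name). The trick is uplevel, which runs the body and the condition in the caller's scope:

    # usage: repeat { body } until { condition }
    proc repeat {body _until_ cond} {
        while {1} {
            uplevel 1 $body                           ;# run the body in the caller's scope
            if {[uplevel 1 [list expr $cond]]} break  ;# evaluate the condition there too
        }
    }

    set n 1
    repeat { incr n } until {$n > 100}
    puts $n   ;# prints 101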

    Read the article

  • What's the canonical way to acknowledge many FOSS sources in a single project?

    - by boost
    I have a project which uses a large number of LGPL, Artistic and other open-source licensed libraries. What's the canonical (i.e. the "standard") way of acknowledging multiple sources in a single project download? Also, some of the sources I've used are from sites where using the code is okay, but publishing the source isn't. What's the usual manner of attribution in that case, and the usual manner of making the source available in an open-source project?

    Read the article

  • Avoid overwriting of logs

    - by Koppar
What usually happens is that the logs fill up and begin to be overwritten, which makes them useless. To avoid this, tune these two properties in the logging.properties file to suit your requirements: java.util.logging.FileHandler.count = x (the default is 1; increase it). This specifies the number of log files that can be created before overwriting starts. For instance, if you set it to 5 (with %g in the file pattern), java0.log, java1.log ... java4.log will be created, so more information can be captured. Likewise, java.util.logging.FileHandler.limit specifies the maximum size, in bytes, of each log file (0 means no limit).
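A small logging.properties sketch along these lines (my addition; the values are assumptions to adapt -- with %g in the pattern and a count of 5, the rotated files come out as java0.log through java4.log, matching the naming above):

    handlers = java.util.logging.FileHandler
    java.util.logging.FileHandler.pattern = %h/java%g.log
    java.util.logging.FileHandler.limit = 1000000
    java.util.logging.FileHandler.count = 5
    java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter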

    Read the article

< Previous Page | 102 103 104 105 106 107 108 109 110 111 112 113  | Next Page >