Search Results

Search found 3296 results on 132 pages for 'executable compression'.

  • How do I use WMI with Delphi without drastically increasing the application's file size?

    - by Mick
    I am using Delphi 2010. When I create a console application that just prints "Hello World", it takes 111 KB. To query WMI from Delphi, I add the WBEMScripting_TLB, ActiveX, and Variants units to my project; after performing a single simple WMI query, my executable size jumps to 810 KB. Is there any way to query WMI without such a large addition to the size of the file? Forgive my ignorance, but why do I not have this issue with C++? Here is my code:

        program WMITest;

        {$APPTYPE CONSOLE}

        uses
          SysUtils, WBEMScripting_TLB, ActiveX, Variants;

        function GetWMIstring(wmiHost, root, wmiClass, wmiProperty: string): string;
        var
          Services: ISWbemServices;
          SObject: ISWbemObject;
          ObjSet: ISWbemObjectSet;
          SProp: ISWbemProperty;
          Enum: IEnumVariant;
          Value: Cardinal;
          TempObj: OLEVariant;
          loc: TSWbemLocator;
          SN: string;
          i: integer;
        begin
          Result := '';
          i := 0;
          try
            loc := TSWbemLocator.Create(nil);
            Services := loc.ConnectServer(wmiHost, root {'root\cimv2'}, '', '', '', '', 0, nil);
            ObjSet := Services.ExecQuery('SELECT * FROM ' + wmiClass, 'WQL',
              wbemFlagReturnImmediately and wbemFlagForwardOnly, nil);
            Enum := (ObjSet._NewEnum) as IEnumVariant;
            if not VarIsNull(Enum) then
              try
                while Enum.Next(1, TempObj, Value) = S_OK do
                begin
                  try
                    SObject := IUnknown(TempObj) as ISWBemObject;
                  except
                    SObject := nil;
                  end;
                  TempObj := Unassigned;
                  if SObject <> nil then
                  begin
                    SProp := SObject.Properties_.Item(wmiProperty, 0);
                    SN := SProp.Get_Value;
                    if not VarIsNull(SN) then
                    begin
                      if VarIsArray(SN) then
                      begin
                        for i := VarArrayLowBound(SN, 1) to VarArrayHighBound(SN, 1) do
                          Result := VarToStr(SN[i]);
                      end
                      else
                        Result := SN;
                      Break;
                    end;
                  end;
                end;
                SProp := nil;
              except
                Result := '';
              end
            else
              Result := '';
            Enum := nil;
            Services := nil;
            ObjSet := nil;
          except
            on E: Exception do
              Result := E.Message;
          end;
        end;

        begin
          try
            WriteLn('hello world');
            WriteLn(GetWMIstring('.', 'root\CIMV2', 'Win32_OperatingSystem', 'Caption'));
            WriteLn('done');
          except
            on E: Exception do
              Writeln(E.ClassName, ': ', E.Message);
          end;
        end.
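
    For comparison, this is roughly what the same query looks like in C++, which also hints at the size difference: the C++ version talks to WMI's COM interfaces directly and links only thin system import libraries, while the Delphi imports above pull the WbemScripting type library plus Variants/SysUtils support into the executable. This is only a minimal sketch of the standard COM sequence, with the HRESULT checks and security hardening of the full MSDN sample trimmed away:

        // wmitest.cpp: minimal WMI query sketch; link with wbemuuid.lib
        #include <comdef.h>
        #include <Wbemidl.h>
        #pragma comment(lib, "wbemuuid.lib")

        int main()
        {
            CoInitializeEx(0, COINIT_MULTITHREADED);
            CoInitializeSecurity(NULL, -1, NULL, NULL, RPC_C_AUTHN_LEVEL_DEFAULT,
                                 RPC_C_IMP_LEVEL_IMPERSONATE, NULL, EOAC_NONE, NULL);

            IWbemLocator* loc = NULL;
            CoCreateInstance(CLSID_WbemLocator, 0, CLSCTX_INPROC_SERVER,
                             IID_IWbemLocator, (LPVOID*)&loc);

            IWbemServices* svc = NULL;
            loc->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), NULL, NULL, 0, NULL, 0, 0, &svc);

            IEnumWbemClassObject* en = NULL;
            svc->ExecQuery(_bstr_t(L"WQL"),
                           _bstr_t(L"SELECT Caption FROM Win32_OperatingSystem"),
                           WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                           NULL, &en);

            IWbemClassObject* obj = NULL;
            ULONG n = 0;
            while (en && SUCCEEDED(en->Next(WBEM_INFINITE, 1, &obj, &n)) && n)
            {
                VARIANT v;
                obj->Get(L"Caption", 0, &v, 0, 0);  // e.g. the OS name
                wprintf(L"%s\n", V_BSTR(&v));
                VariantClear(&v);
                obj->Release();
            }
            if (en)  en->Release();
            if (svc) svc->Release();
            if (loc) loc->Release();
            CoUninitialize();
        }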

  • Add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for audit logs) which needs to append audit log entries to an association of the object:

        public class Company : IAuditable
        {
            // Other stuff removed for brevity

            IAuditLog IAuditable.CreateAuditEntry()
            {
                var entry = new CompanyAudit();
                this.auditLogs.Add(entry);
                return entry;
            }

            public virtual IEnumerable<CompanyAudit> AuditLogs
            {
                get { return this.auditLogs; }
            }
        }

    The AuditLogs collection is mapped with cascading:

        public class CompanyMap : ClassMap<Company>
        {
            public CompanyMap()
            {
                // Id and others removed for brevity
                HasMany(x => x.AuditLogs).AsSet()
                    .LazyLoad()
                    .Access.ReadOnlyPropertyThroughCamelCaseField()
                    .Cascade.All();
            }
        }

    And the listener just asks the auditable object to create log entries so it can update them:

        internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener
        {
            public bool OnPreUpdate(PreUpdateEvent ev)
            {
                var audit = ev.Entity as IAuditable;
                if (audit == null)
                    return false;

                LogProperty(audit);
                return false;
            }

            public bool OnPreInsert(PreInsertEvent ev)
            {
                var audit = ev.Entity as IAuditable;
                if (audit == null)
                    return false;

                LogProperty(audit);
                return false;
            }

            private static void LogProperty(IAuditable auditable)
            {
                var entry = auditable.CreateAuditEntry();
                entry.CreatedAt = DateTime.Now;
                entry.Who = GetCurrentUser(); // Might potentially execute a query.
                // Other information is also set on the entry here.
            }
        }

    The problem with it, though, is that it throws a TransientObjectException when committing the transaction:

        NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: CompanyAudit, Entity: CompanyAudit
        at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session)
        at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session)
        at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session)
        at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session)
        at NHibernate.Action.CollectionRecreateAction.Execute()
        at NHibernate.Engine.ActionQueue.Execute(IExecutable executable)
        at NHibernate.Engine.ActionQueue.ExecuteActions(IList list)
        at NHibernate.Engine.ActionQueue.ExecuteActions()
        at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session)
        at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event)
        at NHibernate.Impl.SessionImpl.Flush()
        at NHibernate.Transaction.AdoTransaction.Commit()

    As the cascading is set to All, I expected NH to handle this. I also tried to modify the collection using the event's state, but pretty much the same happens. So the question is: what is the last chance to modify an object's associations before it gets saved? Thanks, Dmitriy.

  • How do I work around this problem creating a virtualenv environment with a custom-built Python?

    - by Daryl Spitzer
    I need to run some code on a Linux machine with Python 2.3.4 pre-installed. I'm not on the sudoers list for that machine, so I built Python 2.6.4 into (a subdirectory in) my home directory. Then I attempted to use virtualenv (for the first time), but got:

        $ Python-2.6.4/python virtualenv/virtualenv.py ENV
        New python executable in ENV/bin/python
        Could not find platform dependent libraries <exec_prefix>
        Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
        Installing setuptools.........
        Complete output from command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg:
        Could not find platform dependent libraries <exec_prefix>
        Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
        'import site' failed; use -v for traceback
        Traceback (most recent call last):
          File "<string>", line 67, in <module>
        ImportError: No module named md5
        ----------------------------------------
        ...Installing setuptools...done.
        Traceback (most recent call last):
          File "virtualenv/virtualenv.py", line 1488, in <module>
            main()
          File "virtualenv/virtualenv.py", line 529, in main
            use_distribute=options.use_distribute)
          File "virtualenv/virtualenv.py", line 619, in create_environment
            install_setuptools(py_executable, unzip=unzip_setuptools)
          File "virtualenv/virtualenv.py", line 361, in install_setuptools
            _install_req(py_executable, unzip)
          File "virtualenv/virtualenv.py", line 337, in _install_req
            cwd=cwd)
          File "virtualenv/virtualenv.py", line 590, in call_subprocess
            % (cmd_desc, proc.returncode))
        OSError: Command /apps/users/dspitzer/ENV/bin/python -c "#!python \"\"\"Bootstrap setuptoo... " /apps/users/dspitzer/virtualen...6.egg failed with error code 1

    Should I be setting PYTHONHOME to some value? (I intentionally named my ENV "ENV" for lack of a better name.)

    Not knowing if I can ignore those errors, I tried installing nose (0.11.1) into my ENV:

        $ cd nose-0.11.1/
        $ ls
        AUTHORS    doc/               lgpl.txt     nose.egg-info/  selftest.py*
        bin/       examples/          MANIFEST.in  nosetests.1     setup.cfg
        build/     functional_tests/  NEWS         PKG-INFO        setup.py
        CHANGELOG  install-rpm.sh*    nose/        README.txt      unit_tests/
        $ ~/ENV/bin/python setup.py install
        Could not find platform dependent libraries <exec_prefix>
        Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
        Traceback (most recent call last):
          File "setup.py", line 1, in <module>
            from nose import __version__ as VERSION
          File "/apps/users/dspitzer/nose-0.11.1/nose/__init__.py", line 1, in <module>
            from nose.core import collector, main, run, run_exit, runmodule
          File "/apps/users/dspitzer/nose-0.11.1/nose/core.py", line 3, in <module>
            from __future__ import generators
        ImportError: No module named __future__

    Any advice?

  • Can't install thin using rubygems on Ubuntu 9.10

    - by skyfive
    How can I fix this error and install thin (or other gems)?

        $ sudo gem install thin
        Building native extensions.  This could take a while...
        ERROR:  Error installing thin:
            ERROR: Failed to build gem native extension.

        /usr/bin/ruby1.9.1 extconf.rb
        checking for rb_trap_immediate in ruby.h,rubysig.h... *** extconf.rb failed ***
        Could not create Makefile due to some reason, probably lack of
        necessary libraries and/or headers.  Check the mkmf.log file for more
        details.  You may need configuration options.

        Provided configuration options:
            --with-opt-dir
            --without-opt-dir
            --with-opt-include
            --without-opt-include=${opt-dir}/include
            --with-opt-lib
            --without-opt-lib=${opt-dir}/lib
            --with-make-prog
            --without-make-prog
            --srcdir=.
            --curdir
            --ruby=/usr/bin/ruby1.9.1
        /usr/lib/ruby/1.9.1/mkmf.rb:362:in `try_do': The complier failed to generate an executable file. (RuntimeError)
        You have to install development tools first.
            from /usr/lib/ruby/1.9.1/mkmf.rb:425:in `try_compile'
            from /usr/lib/ruby/1.9.1/mkmf.rb:543:in `try_var'
            from /usr/lib/ruby/1.9.1/mkmf.rb:791:in `block in have_var'
            from /usr/lib/ruby/1.9.1/mkmf.rb:668:in `block in checking_for'
            from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block (2 levels) in postpone'
            from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open'
            from /usr/lib/ruby/1.9.1/mkmf.rb:274:in `block in postpone'
            from /usr/lib/ruby/1.9.1/mkmf.rb:248:in `open'
            from /usr/lib/ruby/1.9.1/mkmf.rb:270:in `postpone'
            from /usr/lib/ruby/1.9.1/mkmf.rb:667:in `checking_for'
            from /usr/lib/ruby/1.9.1/mkmf.rb:790:in `have_var'
            from extconf.rb:16:in `<main>'

        Gem files will remain installed in /var/lib/gems/1.9.1/gems/eventmachine-0.12.10 for inspection.
        Results logged to /var/lib/gems/1.9.1/gems/eventmachine-0.12.10/ext/gem_make.out

    Additional information is below:

        $ cat /etc/issue
        Ubuntu 9.10 \n \l

        $ dpkg -l | grep ruby
        ii  libreadline-ruby1.9.1  1.9.1.243-2     Readline interface for Ruby 1.9.1
        ii  libruby1.9.1           1.9.1.243-2     Libraries necessary to run Ruby 1.9.1
        ii  ruby1.9.1              1.9.1.243-2     Interpreter of object-oriented scripting lan
        ii  ruby1.9.1-dev          1.9.1.243-2     Header files for compiling extension modules
        ii  rubygems1.9.1          1.3.5-1ubuntu2  package management framework for Ruby librar

        $ ruby -v
        ruby 1.9.1p243 (2009-07-16 revision 24175) [x86_64-linux]

        $ gem list

        *** LOCAL GEMS ***

        rack (1.1.0)
        sinatra (1.0)

  • WPF Choppy Animation

    - by Chris Dunaway
    WPF, Windows XP SP3. I'm having a problem with a simple WPF animation. I use the following XAML (in XamlPad and also in a WPF project):

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:sys="clr-namespace:System;assembly=mscorlib"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" >
          <Border Name="MyBorder" BorderThickness="10" BorderBrush="Blue"
                  CornerRadius="10" Background="DarkRed" >
            <Rectangle Name="MyRectangle" Margin="10" StrokeDashArray="2.0,1.0"
                       StrokeThickness="10" RadiusX="10" RadiusY="10"
                       Stroke="Black" StrokeDashOffset="0">
              <Rectangle.Triggers>
                <EventTrigger RoutedEvent="Rectangle.Loaded">
                  <BeginStoryboard>
                    <Storyboard>
                      <DoubleAnimation Storyboard.TargetName="MyRectangle"
                                       Storyboard.TargetProperty="StrokeDashOffset"
                                       From="0.0" To="3.0" Duration="0:0:1"
                                       RepeatBehavior="Forever"
                                       Timeline.DesiredFrameRate="30" />
                    </Storyboard>
                  </BeginStoryboard>
                </EventTrigger>
              </Rectangle.Triggers>
            </Rectangle>
          </Border>
        </Page>

    It has the effect of causing the dashed outline to animate around the rectangle. After a fresh reboot of the machine, this animation is nice and smooth. However, I tend to leave my machine on all the time, and after some period of time elapses (I don't know how long), the animation starts stuttering and becomes choppy.

    I thought it might be memory or resource issues, but after shutting down all other apps and any services that seem unnecessary, the stuttering still continues. After a system reboot, however, the animation is smooth again! I get the same symptoms in a WPF app or in XamlPad. In the case of the app, it doesn't seem to make any difference whether I run in the debugger or run the executable directly.

    I applied the patch at this link: http://support.microsoft.com/kb/981741 and thought it had taken care of the issue, but it seems not to have. I have seen some posts that might indicate that using transparency affects animation but, as you can see, my XAML does not use transparency.

    Can anyone give me some suggestions on how to determine what the problem is? Are there any WPF diagnostic tools that might help?

    UPDATE: I have checked my video drivers and they are the latest version (nVidia GeForce 8400 GS).

  • Winforms connection strings from App.config

    - by Geo Ego
    I have a Winforms app that I am developing in C# that will serve as a frontend for a SQL Server 2005 database. I rolled the executable out to a test machine and ran it. It worked perfectly fine on the test machine up until the last round of changes that I made. Now, however, it throws the following exception immediately upon opening:

        System.NullReferenceException: Object reference not set to an instance of an object.
        at PSRD_Specs_Database_Administrat.mainMenu.mainMenu_Load(Object sender, EventArgs e)
        at System.Windows.Forms.Form.OnLoad(EventArgs e)
        at System.Windows.Forms.Form.OnCreateControl()
        at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
        at System.Windows.Forms.Control.CreateControl()
        at System.Windows.Forms.Control.WmShowWindow(Message& m)
        at System.Windows.Forms.Control.WndProc(Message& m)
        at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
        at System.Windows.Forms.ContainerControl.WndProc(Message& m)
        at System.Windows.Forms.Form.WmShowWindow(Message& m)
        at System.Windows.Forms.Form.WndProc(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
        at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
        at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)

    The only thing I changed in this version that pertains to mainMenu_Load is the way the connection string to the database is called. Previously, I set a string with the connection string on every form that needed it, like:

        string conString = "Data Source = SHAREPOINT;Trusted_Connection = yes;" +
                           "database = CustomerDatabase;connection timeout = 15";

    As my app grew and I added forms to it, I decided to add an App.config to the project. I defined the connection string in it:

        <connectionStrings>
          <add name="conString"
               providerName="System.Data.SqlClient"
               connectionString="Data Source = SHAREPOINT;Trusted_Connection = yes;database = CustomerDatabase;connection timeout = 15" />
        </connectionStrings>

    I then created a static method that returns the conString:

        public static string GetConnectionString(string conName)
        {
            string strReturn = string.Empty;
            if (!(string.IsNullOrEmpty(conName)))
            {
                strReturn = ConfigurationManager.ConnectionStrings[conName].ConnectionString;
            }
            else
            {
                strReturn = ConfigurationManager.ConnectionStrings["conString"].ConnectionString;
            }
            return strReturn;
        }

    I removed the conString variable and now call the connection string like so:

        PublicMethods.GetConnectionString("conString").ToString()

    It appears that this is giving me the error. I changed these instances to directly call the connection string from App.config without using GetConnectionString, for instance in a SqlConnection:

        using (SqlConnection con = new SqlConnection(
            ConfigurationManager.ConnectionStrings["conString"].ConnectionString))

    This also threw the exception. However, when I went back to using the conString variable on each form, I had no issues. What I don't understand is why all three methods work fine on my development machine, while using the App.config directly or via the static method I created throws exceptions.

  • Visual Studio Express 2012 debug mode doesn't work

    - by user2350086
    I have a project in Visual Studio that I have been working on for a while, and I have used the debugger extensively. Recently I changed some settings and lost the ability to stop the program and step through code; I can't figure out what I changed that might have affected this.

    If I put a breakpoint in my code and try to have the program stop there, it doesn't. The breakpoint shows up white with a red outline. If I hover the mouse over it, it says:

        The breakpoint will not currently be hit. No executable code of the debugger's
        target code type is associated with this line. Possible causes include:
        conditional compilation, compiler optimizations, or the target architecture of
        this line is not supported by the current debugger code type.

    I know for a fact that the program executes the code where the breakpoint is, because I put the breakpoint at the beginning of the InitializeComponent method. The program displays the window fine but does not stop at the breakpoint. Yes, I am running in debug mode. It seems as though there is a disconnect between the compiled code and the source code displayed. Does anyone know what that would be, or which compiler settings I should check to re-enable debugging?

    Here are the compiler options:

        /GS /analyze- /W3 /Zc:wchar_t /I"D:\dev\libcurl-7.19.3-win32-ssl-msvc\include" /Zi /Od /sdl
        /Fd"Debug\vc110.pdb" /fp:precise /D "WIN32" /D "_DEBUG" /D "_UNICODE" /D "UNICODE"
        /errorReport:prompt /WX- /Zc:forScope /Oy- /clr
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\mscorlib.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Data.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Drawing.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.DataVisualization.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Windows.Forms.dll"
        /FU"C:\Program Files (x86)\Reference Assemblies\Microsoft\Framework.NETFramework\v4.5\System.Xml.dll"
        /MDd /Fa"Debug\" /EHa /nologo /Fo"Debug\" /Fp"Debug\Prog.pch"

    The linker options are:

        /OUT:"D:\dev\Prog\Debug\Prog.exe" /MANIFEST /NXCOMPAT /PDB:"D:\dev\Prog\Debug\Prog.pdb"
        /DYNAMICBASE "curllib.lib" "winmm.lib" "kernel32.lib" "user32.lib" "gdi32.lib" "winspool.lib"
        "comdlg32.lib" "advapi32.lib" "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib"
        "odbccp32.lib" /FIXED:NO /DEBUG /MACHINE:X86 /ENTRY:"Main" /INCREMENTAL
        /PGD:"D:\dev\Prog\Debug\Prog.pgd" /SUBSYSTEM:WINDOWS
        /MANIFESTUAC:"level='asInvoker' uiAccess='false'"
        /ManifestFile:"Debug\Prog.exe.intermediate.manifest" /ERRORREPORT:PROMPT /NOLOGO
        /LIBPATH:"D:\dev\libcurl-7.19.3-win32-ssl-msvc\lib\Debug" /ASSEMBLYDEBUG /TLBID:1

  • Having trouble getting MEF imports to be resolved

    - by Dave
    This is sort of a continuation of one of my earlier posts, which involves the resolving of modules in my WPF application. This question is specifically about the effect that interdependencies of modules, and the method of constructing those modules (i.e. via MEF or through new), have on MEF's ability to resolve relationships.

    First of all, here is a simple UML diagram of my test application. I have tried two approaches:

    - left approach: the App implements IError
    - right approach: the App has a member that implements IError

    Left approach. My code-behind looked like this (just the MEF-related stuff):

        // app.cs
        [Export(typeof(IError))]
        public partial class Window1 : Window, IError
        {
            [Import]
            public CandyCo.Shared.LibraryInterfaces.IPlugin Plugin { get; set; }

            [Export]
            public CandyCo.Shared.LibraryInterfaces.ICandySettings Settings { get; set; }

            public Window1()
            {
                // I create the preferences here with new, instead of using MEF. I wonder
                // if that's my whole problem? If I use MEF, and want to have parameters
                // going to the constructor, then do I have to [Export] a POCO (i.e. string)?
                Settings = new CandySettings("Settings", @"c:\settings.xml");

                var catalog = new DirectoryCatalog(".");
                var container = new CompositionContainer(catalog);
                try
                {
                    container.ComposeParts(this);
                }
                catch (CompositionException ex)
                {
                    foreach (CompositionError e in ex.Errors)
                    {
                        string description = e.Description;
                        string details = e.Exception.Message;
                    }
                    throw;
                }
            }
        }

        // plugin.cs
        [Export(typeof(IPlugin))]
        public class Plugin : IPlugin
        {
            [Import]
            public CandyCo.Shared.LibraryInterfaces.ICandySettings CandySettings { get; set; }

            [Import]
            public CandyCo.Shared.LibraryInterfaces.IError ErrorInterface { get; set; }

            [ImportingConstructor]
            public Plugin(ICandySettings candy_settings, IError error_interface)
            {
                CandySettings = candy_settings;
                ErrorInterface = error_interface;
            }
        }

        // candysettings.cs
        [Export(typeof(ICandySettings))]
        public class CandySettings : ICandySettings { ... }

    Right-side approach. Basically the same as the left-side approach, except that I created a class that inherits from IError in the same assembly as Window1, and then used an [Import] to try to get MEF to resolve it for me.

    Can anyone explain how the two ways I have approached MEF here are flawed? I have been in the dark for so long that instead of reading about MEF and trying different suggestions, I've added MEF to my solution and am stepping into the code. The part where it looks like it fails is when it calls partManager.GetSavedImport(). For some reason, the importCache is null, which I don't understand. All the way up to this point, it's been looking at the part (Window1) and trying to resolve two imported interfaces, IError and IPlugin. I would have expected it to enter code that looks at other assemblies in the same executable folder and then check them for exports, so that it knows how to resolve the imports...

  • Any tool(s) for knowing the layout (segments) of a running process in Windows?

    - by claws
    I've always been curious about:

    - How exactly does a process look in memory?
    - What are the different segments (parts) in it?
    - How exactly are the program (on disk) and the process (in memory) related?

    My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process

    In my quest, I finally found an answer. I found this excellent article that cleared up most of my questions: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html

    In the above article, the author shows how to get the different segments of a process (on Linux) and compares them with the corresponding ELF file. I'm quoting that section here:

        Curious to see the real layout of process segments? We can use the
        /proc/<pid>/maps file to reveal it, where <pid> is the PID of the process
        we want to observe. Before we move on, we have a small problem here: our
        test program runs so fast that it ends before we can even dump the
        related /proc entry. I use gdb to solve this. You can use another trick,
        such as inserting sleep() before it calls return(). In a console (or a
        terminal emulator such as xterm) do:

        $ gdb test
        (gdb) b main
        Breakpoint 1 at 0x8048376
        (gdb) r
        Breakpoint 1, 0x08048376 in main ()

        Hold right here, open another console and find out the PID of program
        "test". If you want the quick way, type:

        $ cat /proc/`pgrep test`/maps

        You will see an output like below (you might get different output):

        [1]  0039d000-003b2000 r-xp 00000000 16:41 1080084    /lib/ld-2.3.3.so
        [2]  003b2000-003b3000 r--p 00014000 16:41 1080084    /lib/ld-2.3.3.so
        [3]  003b3000-003b4000 rw-p 00015000 16:41 1080084    /lib/ld-2.3.3.so
        [4]  003b6000-004cb000 r-xp 00000000 16:41 1080085    /lib/tls/libc-2.3.3.so
        [5]  004cb000-004cd000 r--p 00115000 16:41 1080085    /lib/tls/libc-2.3.3.so
        [6]  004cd000-004cf000 rw-p 00117000 16:41 1080085    /lib/tls/libc-2.3.3.so
        [7]  004cf000-004d1000 rw-p 004cf000 00:00 0
        [8]  08048000-08049000 r-xp 00000000 16:06 66970      /tmp/test
        [9]  08049000-0804a000 rw-p 00000000 16:06 66970      /tmp/test
        [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0
        [11] bffeb000-c0000000 rw-p bffeb000 00:00 0
        [12] ffffe000-fffff000 ---p 00000000 00:00 0

        Note: I added a number to each line as a reference. Back to gdb, type:

        (gdb) q

    So, in total, we see 12 segments (also known as Virtual Memory Areas, or VMAs).

    But I want to know about Windows processes and the PE file format. Are there any tools for getting the layout (segments) of a running process in Windows? Any other good resources for learning more on this subject?

    EDIT: Are there any good articles which show the mapping between PE file sections and VA segments?
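
    On the Windows side, the Sysinternals tools VMMap and Process Explorer show this information interactively, and `dumpbin /headers` lists the PE sections on disk. Programmatically, the analogue of walking /proc/<pid>/maps is calling VirtualQueryEx region by region. A minimal self-inspecting sketch (pass a handle from OpenProcess() instead of GetCurrentProcess() to examine another PID):

        #include <windows.h>
        #include <cstdio>

        int main()
        {
            MEMORY_BASIC_INFORMATION mbi;
            unsigned char* addr = nullptr;

            // Walk the address space one region at a time; VirtualQueryEx
            // returns 0 once addr passes the end of user address space.
            while (VirtualQueryEx(GetCurrentProcess(), addr, &mbi, sizeof mbi) == sizeof mbi)
            {
                if (mbi.State == MEM_COMMIT)
                    printf("%p - %p  protect=%08lx  type=%08lx\n",
                           mbi.BaseAddress,
                           (void*)((unsigned char*)mbi.BaseAddress + mbi.RegionSize),
                           mbi.Protect, mbi.Type);

                addr = (unsigned char*)mbi.BaseAddress + mbi.RegionSize;
            }
            return 0;
        }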

  • More GCC link time issues: undefined reference to main

    - by vikramtheone
    Hi guys, I'm writing software for a Cortex-A8 processor and I have to write some ARM assembly code to access specific registers. I'm making use of the GNU compilers and related toolchains; these tools are installed on the processor board (a Freescale i.MX515 running Ubuntu), and I connect to it from my Windows host PC using WinSCP and a PuTTY terminal.

    As usual, I started with a simple C project having main.c and functions.s. I compile main.c using gcc, assemble functions.s using as, and link the generated object files using gcc again, but I get strange errors during this process.

    An important finding: meanwhile, I found out that my assembly code may have some issues, because when I assemble it individually using the command `as -o functions.o functions.s` and try running the generated functions.o with `./functions.o`, the bash shell fails to recognize this file as an executable (on pressing Tab, functions.o is not selected; PuTTY does not highlight the file). Can anyone suggest what's happening here? Are there any specific options I have to pass to gcc during the linking process?

    The errors I see are strange and beyond my understanding; I don't understand what gcc is referring to. I'm pasting here the contents of main.c, functions.s, the Makefile and the list of errors. Help, please!!! Vikram

    main.c:

        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
            puts("!!!Hello World!!!"); /* prints !!!Hello World!!! */
            return EXIT_SUCCESS;
        }

    functions.s:

        /* Main program */
        .equ STACK_TOP, 0x20000800
        .text
        .global _start
        .syntax unified

        _start:
            .word STACK_TOP, start
            .type start, function

        start:
            movs r0, #10
            movs r1, #0
        .end

    Makefile:

        all: hello

        hello: main.o functions.o
                gcc -o main.o functions.o

        main.o: main.c
                gcc -c -mcpu=cortex-a8 main.c

        functions.o: functions.s
                as -mcpu=cortex-a8 -o functions.o functions.s

    Errors:

        ubuntu@ubuntu-desktop:~/Documents/Project/Others/helloworld$ make
        gcc -c -mcpu=cortex-a8 main.c
        as -mcpu=cortex-a8 -o functions.o functions.s
        gcc -o main.o functions.o
        functions.o: In function `_start':
        (.text+0x0): multiple definition of `_start'
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o:init.c:(.text+0x0): first defined here
        /usr/lib/gcc/arm-linux-gnueabi/4.3.3/../../../crt1.o: In function `_start':
        init.c:(.text+0x30): undefined reference to `main'
        collect2: ld returned 1 exit status
        make: *** [hello] Error 1
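
    Judging purely from the transcript, two separate problems seem to be tangled together. First, `as` always produces an object file, never an executable, so `./functions.o` can never run regardless of its contents. Second, the link line `gcc -o main.o functions.o` names main.o as the output, so main.o is never actually linked in (hence "undefined reference to `main'"), while gcc's startup file crt1.o already defines `_start` and calls main, colliding with the `_start` in functions.s. For a hosted Linux program, the usual shape is to let crt1.o keep `_start` and export the assembly as an ordinary function. A minimal sketch under those assumptions (the name `set_regs` is my invention):

        // main.cpp: call into the assembly routine instead of replacing _start.
        //
        // functions.s would then read something like:
        //     .global set_regs
        //     set_regs:
        //         movs r0, #10   @ r0 carries the return value in the ARM EABI
        //         bx   lr
        //
        // and the link line becomes:  gcc -o hello main.o functions.o
        extern "C" int set_regs(void);

        int main(void)
        {
            return set_regs();
        }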

  • How do I programmatically prevent the "Program Compatibility Assistant" in Vista (and Windows 7) from appearing?

    - by Asaf
    I develop a C++ program which might use Adobe Flash, although it is not essential. I use CoCreateInstance to create the Flash object, and if it fails, I know Flash is not installed, so I don't use it. However, in Vista (and I think Windows 7 as well), when Flash is not installed, after leaving the application the "Program Compatibility Assistant" pops up a message saying that "This program requires a missing Windows component", specifying flash.ocx.

    Is there a way to prevent this message from appearing? I don't want to force any user to install Flash (especially since it's the IE ActiveX control, and Firefox users might not have it installed), and my application can operate well without Flash. Plus, this message is really annoying when it appears after every run. I don't mean disabling the PCA on the user's machine, of course, but programmatically suppressing this specific appearance on all machines. Any thoughts? Thanks

    [EDIT:] I followed Shay's lead (thanks) and did some more digging of my own. I added the following XML to the application's manifest:

        <trustInfo xmlns="urn:schemas-microsoft-com:asm.v2">
          <security>
            <requestedPrivileges>
              <requestedExecutionLevel level="asInvoker" uiAccess="false">
              </requestedExecutionLevel>
            </requestedPrivileges>
          </security>
        </trustInfo>

    (See also: msdn.microsoft.com/en-us/library/bb756929.aspx.) This solved the problem on Vista 64. To solve the same problem on Windows 7, I added the following:

        <compatibility xmlns="urn:schemas-microsoft-com:compatibility.v1">
          <application>
            <!--The ID below indicates application support for Windows Vista -->
            <supportedOS Id="{e2011457-1546-43c5-a5fe-008deee3d3f0}"/>
            <!--The ID below indicates application support for Windows 7 -->
            <supportedOS Id="{35138b9a-5d96-4fbd-8e2d-a2440225f93a}"/>
          </application>
        </compatibility>

    (See also: blogs.msdn.com/yvesdolc/archive/2009/09/22/the-new-compatibility-section-in-the-application-manifest.aspx.) This solved Windows 7. But for some reason it still happens on Vista 32... I also tried editing the manifest of the specific DLL which causes the problem, but it had no effect; only the executable's own manifest affected the problem. So... Vista 32?

  • Chaining multiple ShellExecute calls

    - by IVlad
    Consider the following code and its executable, runner.exe:

        #include <iostream>
        #include <string>
        #include <windows.h>
        using namespace std;

        int main(int argc, char *argv[])
        {
            SHELLEXECUTEINFO shExecInfo;

            shExecInfo.cbSize = sizeof(SHELLEXECUTEINFO);
            shExecInfo.fMask = NULL;
            shExecInfo.hwnd = NULL;
            shExecInfo.lpVerb = "open";
            shExecInfo.lpFile = argv[1];

            string Params = "";
            for ( int i = 2; i < argc; ++i )
                Params += argv[i] + ' ';

            shExecInfo.lpParameters = Params.c_str();
            shExecInfo.lpDirectory = NULL;
            shExecInfo.nShow = SW_SHOWNORMAL;
            shExecInfo.hInstApp = NULL;

            ShellExecuteEx(&shExecInfo);

            return 0;
        }

    These two batch files both do what they're supposed to, which is (1) run notepad.exe and (2) run notepad.exe and tell it to try to open test.txt:

        1. runner.exe notepad.exe
        2. runner.exe notepad.exe test.txt

    Now, consider this batch file:

        3. runner.exe runner.exe notepad.exe

    This one should run runner.exe and pass notepad.exe as one of its command line arguments, shouldn't it? Then that second instance of runner.exe should run notepad.exe, which doesn't happen. Instead, I get a "Windows cannot find 'am'. Make sure you typed the name correctly, and then try again" error. If I print the argc argument, it's 14 for the second instance of runner.exe, and the arguments are all weird strings like Files\Microsoft, SQL, Files\Common and so on.

    I can't for the life of me figure out why this happens. I want to be able to chain as many runner.exe calls through command line arguments as possible, or at least two. How can I do that? I am using Windows 7, if that makes a difference.
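
    One likely culprit, offered as a diagnosis sketch rather than a confirmed answer: in C++, `argv[i] + ' '` does not append a space. argv[i] is a char*, so adding the character ' ' (value 32) is pointer arithmetic that jumps 32 bytes past the start of the argument, usually sailing past its NUL terminator into whatever memory follows (often the environment block, whose strings such as "Program Files\Microsoft SQL Server" would neatly explain the stray "am", "Files\Microsoft" and "SQL" arguments). The usual fix is to let std::string do the copying:

        #include <string>

        // Build the parameter string with std::string appends; operator+=
        // taking a const char* copies the characters, so no pointer arithmetic.
        std::string buildParams(int argc, char* argv[])
        {
            std::string params;
            for (int i = 2; i < argc; ++i)
            {
                params += argv[i];
                params += ' ';
            }
            return params;
        }

    It may also be worth zero-initializing the structure (SHELLEXECUTEINFO shExecInfo = {};) before setting cbSize, since several of its fields are otherwise left unset.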

  • Is this scenario in compliance with GPLv3?

    - by Sean Kinsey
    For argument's sake, say that we create a web application that depends on a GPLv3-licensed component, let's say Ext JS. Based on Section 0 of the license, the common notion is that the entire web application (the client-side JavaScript) falls under the definition of a covered work:

        A "covered work" means either the unmodified Program or a work based on the Program.

    and that it will therefore have to be distributed under the same license.

    Ok, so here comes the fun part. This is a short 'program' that is based on Ext JS:

        var myPanel = new Ext.Panel();

    The question that arises is: have I now violated the GPL by not including the source of Ext JS and its license? Ok, so let's take another example:

        <!doctype html>
        <html>
          <head>
            <title>my title</title>
            <script type="text/javascript"
                    src="http://extjs.cachefly.net/ext-3.2.1/ext-all.js">
            </script>
            <link rel="stylesheet" type="text/css"
                  href="http://extjs.cachefly.net/ext-3.2.1/resources/css/ext-all.css" />
            <script type="text/javascript">
              var myPanel = new Ext.Panel();
            </script>
          </head>
          <body>
          </body>
        </html>

    Have I now violated the terms of the GPL? The code conveyed by me to you is in a non-functional state: it will have to be combined with the actual source of Ext JS, which you (your browser) will have to retrieve from a source made public by someone else, to be usable. Now, if the answer to the above is no, how does my conveying this code in visible form differ from the 'invisible' form conveyed by my web server?

    As a side note, a very similar thing is done on Linux with many projects that depend on less permissive licenses: the user has to retrieve those on their own and make them available to the primary lib/executable. How is this not the same, if the user is informed beforehand that he (the browser) will have to retrieve the needed resources from a different source?

    Just to make it clear, I'm pro-FLOSS, and I have also published a number of projects licensed under more permissive licenses. The reason I'm asking this is that I still haven't found anyone offering a definitive answer.

  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question.

    Code description: I have a header file "inline.h" and two translation units, "main.cpp" and "tran.cpp". Details of the code are below.

    inline.h:

        #ifndef __HEADER__
        #define __HEADER__
        #include <stdio.h>

        extern inline int func1(void) { return 5; }
        static inline int func2(void) { return 6; }
        inline int func3(void) { return 7; }

        #endif

    main.c:

        #include <stdio.h>
        #include <inline.h>

        int main(int argc, char *argv[])
        {
            printf("%d\n", func1());
            printf("%d\n", func2());
            printf("%d\n", func3());
            return 0;
        }

    tran.cpp (note that the functions are not inline here):

        #include <stdio.h>

        int func1(void) { return 500; }
        int func2(void) { return 600; }
        int func3(void) { return 700; }

    Questions: The above code does not compile with gcc, whereas it compiles with g++ (assuming you make the gcc-related changes in the code, like renaming the files to .c and not using any C++ header files, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference exists across compilers?

    When you run the program (g++ compiled) built from the two separate compilation units (main.o and tran.o linked into an executable a.out), the output obtained is:

        500
        6
        700

    Why does the compiler pick up the definition of the function which is not inline? Actually, since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, since the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me the detailed steps of the compilation and linking process which lead to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for your valuable input.
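
    For what it's worth, one way to reason about the 500/6/700 result (treat this as an educated reading, not gospel): in C++ an inline function is emitted as a mergeable "weak" symbol in every translation unit that defines it, while tran.cpp's plain definitions are strong symbols. With no optimization the calls aren't actually inlined, the linker resolves them, and a strong symbol beats a weak one, so func1 and func3 resolve to 500 and 700, while the static func2 stays local to main's translation unit and yields 6. In gnu89 C, by contrast, plain `inline` (func3) emits a strong external definition from the header itself, which then collides with the one in the other translation unit, hence gcc's duplicate-definition error. A tiny sketch of the C++ side:

        // odr.h: in C++ this identical definition may appear in every TU;
        // the compiler emits it as a mergeable (weak) symbol and the linker
        // folds the copies together.
        inline int answer() { return 42; }

        // strong.cpp: a plain (non-inline) definition of the same name is a
        // strong symbol. Having both is an ODR violation, but typical linkers
        // resolve it silently in favour of the strong symbol, which is the
        // mechanism behind seeing 500/700 above at -O0.
        int answer() { return 4200; }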

  • Using WSH (VBS) with iMacros - how do they do it?

    - by Carl
    (iMacros for Firefox 6.6.5.0; Firefox 3.6.3; Windows XP Pro SP3 with all updates.)

    I made an iMacro to select "Load next 25" (comments) on a web page (CNN.com). Unfortunately, iMacros doesn't appear to do looping (do the above until that string doesn't appear on the page anymore, i.e. all the comments are loaded). I tried putting {!iloop} in the TAG command, and it didn't work; then I read that it wouldn't.

    So I tried the example at http://wiki.imacros.net/Loop_after_Query_or_Login but I can't find any information on how to actually run the script in that example. I searched Google and found that VBS scripting is handled with .wsh files in Windows XP Pro. (The examples and other references there say Windows does VBS natively, so I looked up how with Google.) So I made the following .wsh file (modified from the above example):

        Option Explicit

        Dim iim1, iret

        'initialize iMacros instance
        set iim1 = CreateObject("imacros")
        iret = iim1.iimInit()

        do while not iret < 0
            iret = iim1.iimPlay("Load All CNN Comments")
        loop

        ' tell user we're done
        msgbox "End."

        ' exit iMacros instance and quit script
        iret = iim1.iimExit()
        Wscript.Quit()

    Here's the iMacro (Load All CNN Comments.iim):

        VERSION BUILD=6650406 RECORDER=FX
        TAG POS=1 TYPE=A ATTR=TXT:Load<SP>next<SP>25
        WAIT SECONDS=#DOWNLOADCOMPLETE#

    The iMacro works by itself: I press Play (in the left iMacros panel) and the next 25 comments load on the CNN.com page in the current tab.

    I put the .wsh file in the ...\iMacros\Macros directory, with the iMacro "Load All CNN Comments.iim". When I run the .wsh file (by double-clicking on its icon; I created it with Notepad, and Windows gave it an icon for that file type, so it's executable) I get the message from "Windows Script Host": "There is no script file specified." I wasn't actually expecting it to work, as I don't see how Windows would know to call iMacros to run the .iim macro.

    It would be nice if there were a simple, COMPLETE example of how to use a VBS script with iMacros that isn't bogged down with unnecessary complications like filling in a form, loading multiple pages, etc. I can't find ANY example. So what do I need to do to get this to work?

    I just installed iMacros yesterday, because I constantly run into the problem that there are hundreds of comments after a CNN.com article, and loading 25 more at a time until they are all on the page makes it impractical to read any replies to my comments. It would also be nice if I could run the macro from Firefox, rather than by double-clicking on some file somewhere. Thanks for any help.

  • How best to store Subversion version information in EARs?

    - by Rene
    When receiving a bug report or an it-doesn't-work message, one of my initial questions is always: what version? With different builds at many stages of testing, planning and deploying, this is often a non-trivial question. In the case of releasing Java JAR (ear, jar, rar, war) files, I would like to be able to look in/at the JAR and switch to the same branch, version or tag that was the source of the released JAR.

    How can I best adjust the Ant build process so that the version information from the svn checkout remains in the created build? I was thinking along the lines of:

    - adding a VERSION file, but with what content?
    - storing information in the META-INF file, but under what property and with which content?
    - copying the sources into the result archive
    - adding svn:properties to all sources, with keywords in places the compiler leaves them be

    I ended up using the svnversion approach (the accepted answer), because it scans the entire subtree, as opposed to svn info, which just looks at the current file/directory. For this I defined the svn task in the Ant file to make it more portable:

        <taskdef name="svn" classname="org.tigris.subversion.svnant.SvnTask">
          <classpath>
            <pathelement location="${dir.lib}/ant/svnant.jar"/>
            <pathelement location="${dir.lib}/ant/svnClientAdapter.jar"/>
            <pathelement location="${dir.lib}/ant/svnkit.jar"/>
            <pathelement location="${dir.lib}/ant/svnjavahl.jar"/>
          </classpath>
        </taskdef>

    Not all builds result in web services. The ear file must keep the same name before deployment because of updating in the application server. Making the file executable is still an option, but until then I just include a version information file:

        <target name="version">
          <svn><wcVersion path="${dir.source}"/></svn>
          <echo file="${dir.build}/VERSION">${revision.range}</echo>
        </target>

    Refs:

    - svnversion: http://svnbook.red-bean.com/en/1.1/re57.html
    - svn info: http://svnbook.red-bean.com/en/1.1/re13.html
    - subclipse svn task: http://subclipse.tigris.org/svnant/svn.html
    - svn client: http://svnkit.com/

  • Programming graphics and sound on PC - Total newbie questions, and lots of them!

    - by Russel
    Hello. This isn't exactly specifically a programming question (or is it?), but I was wondering: how are graphics and sound processed from code and output by the PC?

    My guess for graphics:

    1. There is some reserved memory space somewhere that holds exactly enough room for a frame of graphics output for your monitor. E.g.: 800 x 600 in 24-bit color mode == 800 x 600 x 3 = ~1.4 MB of memory space.
    2. Between each refresh, the program writes video data to this space. This action is completed before the monitor refresh. Assume a simple 2D game: the graphics data is stored in machine code as many bytes representing color values. Depending on what the program(s) being run instruct the PC, the processor reads the appropriate data and writes it to the memory space.
    3. When it is time for the monitor to refresh, it reads from the memory space byte for byte and activates hardware depending on those values for each color element of each pixel.

    All of this, of course, happens crazy-fast, and repeats x times a second, x being the monitor's refresh rate. I've simplified my own likely-incorrect explanation by avoiding talk of double buffering, etc. Here are my questions:

    a) How close is the above guess (the three steps)?

    b) How could one incorporate graphics in pure C++ code? I assume the practical thing that everyone does is use a graphics library (SDL, OpenGL, etc.), but, for example, how do these libraries accomplish what they do? Would manual inclusion of graphics in pure C++ code (say, a 2D sprite) involve creating a two-dimensional array of bit values (or three-dimensional to include multiple RGB values per pixel)? Is this how it would be done waaay back in the day?

    c) Also, continuing from above, do libraries such as SDL that use bitmaps actually just build the bitmap files into the machine code of the executable and use them as though they were built in the same manner mentioned in question b above?

    d) In my hypothetical step 3 above, are there any registers involved? Like, could you write some byte value to some register to output a single color of one byte on the screen? Or is it purely dedicated memory space (= RAM) plus hardware interaction?

    e) Finally, how is all of this done for sound? (I have no idea. :) )
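
    To make the "reserved memory space" in step 1 concrete, here is a toy software framebuffer in C++, purely illustrative and platform-neutral. It uses exactly the byte layout estimated above (800 x 600 x 3 bytes, about 1.4 MB); a real program would hand this buffer to the OS or driver each frame (StretchDIBits on Windows, an SDL surface, an OpenGL texture upload, and so on):

        #include <cstdint>
        #include <vector>

        constexpr int W = 800, H = 600;

        // One byte each for R, G and B per pixel: W * H * 3 bytes (~1.4 MB),
        // matching the back-of-the-envelope figure in the question.
        std::vector<std::uint8_t> framebuffer(static_cast<std::size_t>(W) * H * 3);

        void put_pixel(int x, int y, std::uint8_t r, std::uint8_t g, std::uint8_t b)
        {
            std::size_t i = (static_cast<std::size_t>(y) * W + x) * 3;
            framebuffer[i]     = r;
            framebuffer[i + 1] = g;
            framebuffer[i + 2] = b;
        }

        int main()
        {
            // Draw a horizontal red line; a "2D sprite" is just a loop like
            // this copying a rectangle of color values into the buffer.
            for (int x = 0; x < W; ++x)
                put_pixel(x, 300, 255, 0, 0);
        }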

  • Django - I get a TemplateSyntaxError when I try to open the admin page (http://DOMAIN_NAME/admin)

    - by user140827
    I use the Grappelli plugin. When I try to open the admin page (/admin) I get a TemplateSyntaxError. It says 'get_generic_relation_list' is an invalid block tag:

        TemplateSyntaxError at /admin/
        Invalid block tag: 'get_generic_relation_list', expected 'endblock'

        Request Method:     GET
        Request URL:        http://DOMAIN_NAME/admin/
        Django Version:     1.4
        Exception Type:     TemplateSyntaxError
        Exception Value:    Invalid block tag: 'get_generic_relation_list', expected 'endblock'
        Exception Location: /opt/python27/django/1.4/lib/python2.7/site-packages/django/template/base.py in invalid_block_tag, line 320
        Python Executable:  /opt/python27/django/1.4/bin/python
        Python Version:     2.7.0
        Python Path:
            ['/home/vhosts/DOMAIN_NAME/httpdocs/media', '/home/vhosts/DOMAIN_NAME/private/new_malinnikov/lib',
             '/home/vhosts/DOMAIN_NAME/httpdocs/', '/home/vhosts/DOMAIN_NAME/private/new_malinnikov',
             '/home/vhosts/DOMAIN_NAME/private/new_malinnikov', '/home/vhosts/DOMAIN_NAME/private',
             '/opt/python27/django/1.4', '/home/vhosts/DOMAIN_NAME/httpdocs',
             '/opt/python27/django/1.4/lib/python2.7/site-packages/setuptools-0.6c12dev_r88846-py2.7.egg',
             '/opt/python27/django/1.4/lib/python2.7/site-packages/pip-0.8.1-py2.7.egg',
             '/opt/python27/django/1.4/lib/python27.zip', '/opt/python27/django/1.4/lib/python2.7',
             '/opt/python27/django/1.4/lib/python2.7/plat-linux2', '/opt/python27/django/1.4/lib/python2.7/lib-tk',
             '/opt/python27/django/1.4/lib/python2.7/lib-old', '/opt/python27/django/1.4/lib/python2.7/lib-dynload',
             '/opt/python27/lib/python2.7', '/opt/python27/lib/python2.7/plat-linux2',
             '/opt/python27/lib/python2.7/lib-tk', '/opt/python27/django/1.4/lib/python2.7/site-packages',
             '/opt/python27/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/flup-1.0.3.dev_20100525-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/virtualenv-1.5.1-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/SQLAlchemy-0.6.4-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/SQLObject-0.14.1-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/FormEncode-1.2.3dev-py2.7.egg',
             '/opt/python27/lib/python2.7/site-packages/MySQL_python-1.2.3-py2.7-linux-x86_64.egg',
             '/opt/python27/lib/python2.7/site-packages/psycopg2-2.2.2-py2.7-linux-x86_64.egg',
             '/opt/python27/lib/python2.7/site-packages/pysqlite-2.6.0-py2.7-linux-x86_64.egg',
             '/opt/python27/lib/python2.7/site-packages', '/opt/python27/lib/python2.7/site-packages/PIL']
        Server time:        ???, 7 ??? 2012 04:19:42 +0700

    Error during template rendering: in template /home/vhosts/DOMAIN_NAME/httpdocs/templates/grappelli/admin/base.html, error at line 28 (Invalid block tag: 'get_generic_relation_list', expected 'endblock'):

        18 <!--[if lt IE 8]>
        19 <script src="http://ie7-js.googlecode.com/svn/version/2.0(beta3)/IE8.js" type="text/javascript"></script>
        20 <![endif]-->
        21 {% block javascripts %}
        22 <script type="text/javascript" src="{% admin_media_prefix %}jquery/jquery-1.3.2.min.js"></script>
        23 <script type="text/javascript" src="{% admin_media_prefix %}js/admin/Bookmarks.js"></script>
        24 <script type="text/javascript">
        25     // Admin URL
        26     var ADMIN_URL = "{% get_admin_url %}";
        27     // Generic Relations
        28     {% get_generic_relation_list %}
        29     // Get Bookmarks
        30     $(document).ready(function(){
        31         $.ajax({
        32             type: "GET",
        33             url: '{% url grp_bookmark_get %}',
        34             data: "path=" + escape(window.location.pathname + window.location.search),
        35             dataType: "html",
        36             success: function(data){
        37                 $('ul#bookmarks').replaceWith(data);
        38             }

  • XSLT Stylesheet works with a vbs script, but not Perl

    - by user2472274
    I have a program that is essentially a search application, and it exists in both VBScript and Perl form (I'm trying to make an executable with a GUI). Currently the search application outputs an HTML file, and if a section of text in the HTML is longer than twelve lines, it hides anything after that and includes a clickable More... tag. This is done in XSLT and works with VBScript.

    I literally copied and pasted the stylesheet into the Perl program that I'm using, and it does everything right except for the More... tag. Is there any reason why it would work with VBScript but not Perl? I'm using XML::LibXSLT in the Perl script, and here is the template that is supposed to create the More... tag:

        <xsl:template name="more">
          <xsl:param name="text"/>
          <xsl:param name="row-id"/>
          <xsl:param name="cycle" select="1"/>
          <xsl:choose>
            <xsl:when test="($cycle &gt; 12) and contains($text,'&#13;')">
              <span class="show" onclick="showID('SHOW{$row-id}');style.display = 'none';">More...</span>
              <span class="hidden" id="SHOW{$row-id}">
                <xsl:call-template name="highlight">
                  <xsl:with-param name="text" select="$text"/>
                </xsl:call-template>
              </span>
            </xsl:when>
            <xsl:when test="contains($text,'&#13;')">
              <xsl:call-template name="highlight">
                <xsl:with-param name="text" select="substring-before($text,'&#13;')"/>
              </xsl:call-template>
              <xsl:text>&#13;</xsl:text>
              <xsl:call-template name="more">
                <xsl:with-param name="text" select="substring-after($text,'&#13;')"/>
                <xsl:with-param name="row-id" select="$row-id"/>
                <xsl:with-param name="cycle" select="$cycle + 1"/>
              </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
              <xsl:call-template name="highlight">
                <xsl:with-param name="text" select="$text"/>
              </xsl:call-template>
            </xsl:otherwise>
          </xsl:choose>
        </xsl:template>

  • git filter-branch chmod

    - by Evan Purkhiser
    I accidentally had my umask set incorrectly for the past few months and somehow didn't notice. One of my git repositories has many files marked as executable that should be just 644. This repo has one main master branch and about 4 private feature branches (which I keep rebased on top of master).

    I corrected the files in my master branch by running

        find -type f -exec chmod 644 {} \;

    and committing the changes. I then rebased my feature branches onto master. The problem is that there are newly created files in the feature branches that exist only there, so they weren't corrected by my massive chmod commit.

    I didn't want to create a new commit for each feature branch that does the same thing as the commit I made on master, so I decided it would be best to go back through each commit where a file was created and fix the permissions there. This is what I tried:

        git filter-branch -f --tree-filter 'chmod 644 `git show --diff-filter=ACR --pretty="format:" --name-only $GIT_COMMIT`; git add .' master..

    It looked like this worked, but upon further inspection I noticed that every commit after a commit containing a new file with the proper 644 permissions would actually revert the change with something like:

        diff --git a b
        old mode 100644
        new mode 100755

    I can't for the life of me figure out why this happens. I think I must be misunderstanding how git filter-branch works.

    My solution: I managed to fix my problem using this command:

        git filter-branch -f --tree-filter 'FILES="$FILES "`git show --diff-filter=ACMR --pretty="format:" --name-only $GIT_COMMIT`; chmod 644 $FILES; true' development..

    I keep adding onto the FILES variable to ensure that in each commit, any file created at some point has the proper mode. However, I'm still not sure I really understand why git tracks the file mode for each commit. I had thought that since I fixed the mode of the file when it was first created, it would stay that mode unless one of my other commits explicitly changed it to something else. That did not appear to be the case.

    The reason I thought this would work comes from my understanding of rebase: if I go back to HEAD~5 and change a line of code, that change is propagated through; it doesn't just get changed back in HEAD~4.

  • File sizing issue in DOS/FAT

    - by Heather
    I've been tasked with writing a data collection program for a Unitech HT630, which runs a proprietary DOS operating system that can run executables compiled for 16-bit MS-DOS, with some restrictions. I'm using the Digital Mars C/C++ compiler, which is working well thus far.

    One of the application requirements is that the data file must be human-readable plain text, meaning the file can be imported into Excel or opened by Notepad. I'm using a variable-length record format much like CSV that I've successfully implemented using the C standard library file I/O functions.

    When saving a record, I have to calculate whether the updated record is larger or smaller than the version of the record currently in the data file. If larger, I first shift all records immediately after the current record forward by the size difference calculated before saving the updated record; EOF is extended automatically by the OS to accommodate the extra data. If smaller, I shift all records backwards by my calculated offset.

    This is working well. However, I have found no way to modify the EOF marker or file size to ignore the data after the end of the last record. Most of the time records will grow in size, because the data collection program fills some of the empty fields with data when saving a record. Records will only shrink in size when a correction is made to an existing entry, or on a normal record save if the descriptive data in the record is longer than what the program reads in memory.

    In the situation of a shrinking record, after the last record in the file I'm left with whatever data was sitting there before the shift. I have been writing an EOF delimiter into the file after a "shrinking record save" to signal where the end of my records is, and space-filling the remaining data, but then I no longer have a clean file until a "growing record save" extends the size of the file over the space-filled area.

    The truncate() function in unistd.h does not work (I'm now thinking this is for *nix flavors only?). One proposed solution I've seen involves creating a second file, writing all the data you wish to save into that file, and then deleting the original. Since I only have 4 MB worth of disk space to use, this works if the file size is less than 2 MB minus the size of my program executable and configuration files, but would fail otherwise. It is very likely that when this goes into production, users will end up with a file exceeding 2 MB in size.

    I've looked at Ralph Brown's Interrupt List and the interrupt reference in IBM PC Assembly Language and Programming, and I can't seem to find anything to update the file size or similar. Is reducing a file's size without creating a second file even possible in DOS?
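
    One avenue that may be worth trying, offered as a sketch rather than a tested solution: DOS's write-to-handle call (INT 21h, AH=40h) truncates the file at the current file pointer when asked to write zero bytes, and many 16-bit C runtimes expose the same operation as chsize() in io.h (whether Digital Mars ships chsize() is an assumption to verify). A minimal version using the generic intdos() interface, assuming a 16-bit DOS target:

        #include <dos.h>     /* union REGS, intdos() */
        #include <io.h>      /* lseek()              */
        #include <stdio.h>   /* SEEK_SET             */

        /* Truncate an open file at `size` bytes: seek there, then issue a
           zero-length write (INT 21h, AH=40h, CX=0), which tells DOS to set
           end-of-file at the current position. Returns 0 on success. */
        int dos_truncate(int handle, long size)
        {
            union REGS r;

            if (lseek(handle, size, SEEK_SET) == -1L)
                return -1;

            r.h.ah = 0x40;      /* DOS: write to file handle    */
            r.x.bx = handle;    /* file handle                  */
            r.x.cx = 0;         /* zero bytes => truncate here  */
            intdos(&r, &r);
            return r.x.cflag ? -1 : 0;
        }

    If chsize() does exist in the Digital Mars headers, chsize(handle, size) should be equivalent and saves the interrupt plumbing.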

    Read the article

  • Best of both worlds: browser and desktop game?

    - by Ricket
    When considering a platform for a game, I've decided on multi-platform (Win/Lin/Mac) but can't make up my mind as far as browser vs. desktop. As I'm not all too far into development, and now having second thoughts, I'd like your opinion!

    Browser-based games using Java applets:

    - market penetration is reasonably high (for version 6, it's somewhere around 60% I believe?)
    - using JOGL, 3D performance/quality is decent; certainly good enough to render the crappy 3D graphics that I make
    - there's the (small?) possibility of porting something to Android
    - great for an audience of gamers who switch computers often; they can sit down at any computer, load a webpage and play
    - also great for casual gamers or less knowledgeable gamers who are quite happy playing games in a browser but don't want to install more things on their computer
    - written in a high-level language which I am more familiar with than C++ - but at the same time, I would like to improve my skills with C++ as it is probably where I am headed in the game industry once I get out of school...
    - easier update process: reload the page

    Desktop games using good ol' C++ and OpenGL:

    - 100% market penetration, assuming complete cross-platform support; however, that number shrinks when you consider how many people will go through downloading and installing an executable, compared to just browsing to a webpage and hitting "yes" to a security warning
    - more trouble to maintain cross-platform; but again, for learning purposes I would embrace the challenge and the knowledge I would gain
    - better performance all around
    - true full screen, whereas browser games often struggle with smooth full-screen graphics (especially on Linux, in my experience)
    - can take advantage of distribution platforms such as Steam
    - more likely to be considered a "real" game, whereas browser and Java games are often dismissed as not being real games and therefore not played by "hardcore gamers"
    - installer can be large; don't have to worry so much about download times

    Is there a way to have the best of both worlds? I love Java applets, but I also really like the reasons to write a desktop game. I don't want to constantly port everything between a Java applet project and a C++ project; that would be twice the work! Unity chose to write their own web player plugin. I don't like this, because I am one of the people who will not install their web player for anything, and I don't see myself being able to convince my audience to install a browser plugin. What are my options? Are there other examples out there, besides Unity, of games that have browser and desktop versions? Did I leave out anything in the pro/con lists above?

    Read the article

  • gcc optimization? bug? and its practical implication to project

    - by kumar_m_kiran
    Hi All, my questions are divided into three parts.

    Question 1

    Consider the below code:

        #include <iostream>
        using namespace std;

        int main(int argc, char *argv[])
        {
            const int v = 50;
            int i = 0X7FFFFFFF;

            cout << (i + v) << endl;
            if (i + v < i) {
                cout << "Number is negative" << endl;
            } else {
                cout << "Number is positive" << endl;
            }
            return 0;
        }

    No specific compiler optimisation options (-O flags) are used; the basic compilation command g++ -o test main.cpp is used to form the executable. This seemingly very simple code has odd behaviour on SUSE 64-bit OS, gcc version 4.1.2. The expected output is "Number is negative"; instead, only on the SUSE 64-bit OS, the output is "Number is positive". After some amount of analysis and doing a 'disass' of the code, I find that the compiler optimises as follows: since i is the same on both sides of the comparison and cannot change within the same expression, remove i from the equation. The comparison then becomes if (v < 0), where v is a positive constant, so during compilation itself the address of the else-branch cout call is loaded into the register; no cmp/jmp instructions can be found. I see this behaviour only in gcc 4.1.2 on SUSE 10. When tried on AIX 5.1/5.3 and HP IA64, the result is as expected. Is the above optimisation valid? Or is using the overflow mechanism for int not a valid use case?

    Question 2

    Now when I change the conditional statement from if (i + v < i) to if ( (i + v) < i ), the behaviour is still the same. With this, at least, I would personally disagree: since additional braces are provided, I expect the compiler to create a temporary built-in-type variable and then compare, thus nullifying the optimisation.

    Question 3

    Suppose I have a huge code base and I migrate my compiler version; such a bug/optimisation can cause havoc in my system behaviour. Of course, from a business perspective, it is very inefficient to test all lines of code again just because of a compiler upgrade. I think for all practical purposes, these kinds of errors are very difficult to catch (during an upgrade) and will invariably leak into the production site. Can anyone suggest a possible way to ensure that this kind of bug/optimisation does not have any impact on my existing system/code base?

    PS: When the const on v is removed from the code, the optimisation is not done by the compiler. I believe it is perfectly fine to use the overflow mechanism to find whether the variable is above MAX - 50 (in my case).
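
    For reference, signed integer overflow is undefined behaviour in C and C++, which is what licenses gcc to fold i + v < i into v < 0; the extra parentheses in Question 2 change nothing, since the folding does not depend on evaluation order. A minimal sketch of an overflow test that stays within defined behaviour, comparing against INT_MAX before the addition:

        /* Overflow check without relying on signed wrap-around.
           Because signed overflow is undefined behaviour, the compiler
           may fold `i + v < i` into `v < 0`; the test below may not be
           optimised away, as it never overflows. */
        #include <limits.h>
        #include <stdio.h>

        int main(void)
        {
            const int v = 50;
            int i = 0x7FFFFFFF; /* INT_MAX on this platform */

            if (i > INT_MAX - v)
                puts("i + v would overflow");
            else
                printf("%d\n", i + v);
            return 0;
        }

    The same idea covers the PS: i > INT_MAX - 50 tells you the variable is within 50 of MAX, without ever performing the overflowing addition.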

    Read the article

  • Jumping into argv?

    - by jth
    Hi, I'm experimenting with shellcode and stumbled upon the NOP-slide technique. I wrote a little tool that takes a buffer size as a parameter and constructs a buffer like this: [ NOP | SC | RET ], with the NOPs taking half of the buffer, followed by the shellcode, and the rest filled with the (guessed) return address. It's very similar to the tool aleph1 described in his famous paper. My vulnerable test app is the same as in his paper:

        int main(int argc, char **argv)
        {
            char little_array[512];
            if (argc > 1)
                strcpy(little_array, argv[1]);
            return 0;
        }

    I tested it and, well, it works:

        jth@insecure:~/no_nx_no_aslr$ ./victim $(./exploit 604 0)
        $ exit

    But honestly, I have no idea why. Okay, the saved eip was overwritten as intended, but instead of jumping somewhere into the buffer, it jumped into argv, I think. gdb showed the following addresses before strcpy() was called:

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x80483ed in main (victim.c:7); saved eip 0x154b56
         source language c.
         Arglist at 0xbffff1e8, args: argc=2, argv=0xbffff294
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    Address of little_array:

        (gdb) print &little_array[0]
        $1 = 0xbfffefe8 "\020"

    After strcpy():

        (gdb) i f
        Stack level 0, frame at 0xbffff1f0:
         eip = 0x804840d in main (victim.c:10); saved eip 0xbffff458
         source language c.
         Arglist at 0xbffff1e8, args: argc=-1073744808, argv=0xbffff458
         Locals at 0xbffff1e8, Previous frame's sp is 0xbffff1f0
         Saved registers:
          ebp at 0xbffff1e8, eip at 0xbffff1ec

    So, what happened here? I used a 604-byte buffer to overflow little_array, so it certainly overwrote saved ebp, saved eip, argc, and also argv with the guessed address 0xbffff458. Then, after returning, EIP pointed at 0xbffff458. But little_array resides at 0xbfffefe8; that's a difference of 1136 bytes, so it certainly isn't executing little_array. I followed execution with the stepi command, and at 0xbffff458 and onwards it executes NOPs and reaches the shellcode. I'm not quite sure why this is happening. First of all, am I correct that it executes my shellcode in argv, not little_array? And where does the loader(?) place argv onto the stack? I thought it followed immediately after argc, but between argc and 0xbffff458 there is a gap of 620 bytes. How is it possible that it successfully "lands" in the NOP pad at address 0xbffff458, which is way above the saved eip at 0xbffff1ec? Can someone clarify this? I actually have no idea why this is working. My test machine is an Ubuntu 9.10 32-bit machine without ASLR; victim has an executable stack, set with execstack -s. Thanks in advance.
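
    A small diagnostic sketch, assuming the same 32-bit no-ASLR setup, that makes the layout visible: the kernel places the argv pointer array and the strings it points to near the very top of the stack at execve() time, above main()'s frame, which is why a guessed address like 0xbffff458 can land in the copy of the NOP pad that still lives in the argv[1] string rather than in little_array:

        /* Print where a local buffer and the argv strings live.
           On 32-bit Linux without ASLR, the argv[1] string sits above
           main()'s frame, near the top of the stack, so an exploit
           buffer exists twice: in argv[1] and in the strcpy'd copy. */
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            char little_array[512];

            printf("little_array at %p\n", (void *)little_array);
            printf("argv         at %p\n", (void *)argv);
            if (argc > 1)
                printf("argv[1]      at %p\n", (void *)argv[1]);
            return 0;
        }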

    Read the article
