Search Results

Search found 24117 results on 965 pages for 're write'.


  • Indy 10 - IdSMTP.Send() hangs when sending messages from GMail account

    - by LukLed
    I am trying to send an e-mail using a GMail account (Delphi 7, Indy 10) with these settings:

        TIdSmtp:
          Port := 587;
          UseTLS := utUseExplicitTLS;

        TIdSSLIOHandlerSocketOpenSSL:
          SSLOptions.Method := sslvTLSv1;

    Everything seems to be set OK. I get this response:

        Resolving hostname smtp.gmail.com.
        Connecting to 74.125.77.109.
        SSL status: "before/connect initialization"
        SSL status: "before/connect initialization"
        SSL status: "SSLv3 write client hello A"
        SSL status: "SSLv3 read server hello A"
        SSL status: "SSLv3 read server certificate A"
        SSL status: "SSLv3 read server done A"
        SSL status: "SSLv3 write client key exchange A"
        SSL status: "SSLv3 write change cipher spec A"
        SSL status: "SSLv3 write finished A"
        SSL status: "SSLv3 flush data"
        SSL status: "SSLv3 read finished A"
        SSL status: "SSL negotiation finished successfully"
        SSL status: "SSL negotiation finished successfully"
        Cipher: name = RC4-MD5; description = RC4-MD5 SSLv3 Kx=RSA Au=RSA Enc=RC4(128) Mac=MD5; bits = 128; version = TLSv1/SSLv3;

    And then it hangs and doesn't finish. The e-mail is not sent. What can the problem be?
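    A minimal sketch of the full wiring, in case anything is missing (hedged: host, credentials, and the 30-second timeout are illustrative, and ReadTimeout is a mitigation so a stalled read raises an exception instead of hanging forever, not a confirmed root cause):

        var
          smtp: TIdSMTP;
          ssl: TIdSSLIOHandlerSocketOpenSSL;
        begin
          smtp := TIdSMTP.Create(nil);
          ssl := TIdSSLIOHandlerSocketOpenSSL.Create(smtp);
          ssl.SSLOptions.Method := sslvTLSv1;
          smtp.IOHandler := ssl;             // the TLS handler must be attached explicitly
          smtp.UseTLS := utUseExplicitTLS;   // STARTTLS upgrade on port 587
          smtp.Host := 'smtp.gmail.com';
          smtp.Port := 587;
          smtp.Username := 'you@gmail.com';  // GMail requires authentication
          smtp.Password := 'secret';
          smtp.ReadTimeout := 30000;         // fail fast instead of blocking forever
          smtp.Connect;
          try
            smtp.Send(msg);                  // msg: a TIdMessage built elsewhere
          finally
            smtp.Disconnect;
          end;
        end;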


  • How to Process Lambda Expressions Passed as Argument Into Method - C# .NET 3.5

    - by Sunday Ironfoot
    My knowledge of lambda expressions is a bit shaky; while I can write code that uses lambda expressions (aka LINQ), I'm trying to write my own method that takes a few arguments of type lambda expression. Background: I'm trying to write a method that returns a tree collection of objects of type TreeItem from literally ANY other object type. I have the following so far:

        public class TreeItem
        {
            public string Id { get; set; }
            public string Text { get; set; }
            public TreeItem Parent { get; protected set; }

            public IList<TreeItem> Children
            {
                get
                {
                    // Implementation that returns custom TreeItemCollection type
                }
            }

            public static IList<TreeItem> GetTreeFromObject<T>(
                IList<T> items,
                Expression<Func<T, string>> id,
                Expression<Func<T, string>> text,
                Expression<Func<T, IList<T>>> childProperty) where T : class
            {
                foreach (T item in items)
                {
                    // Errrm!?? What do I do now?
                }
                return null;
            }
        }

    ...which can be called via...

        IList<TreeItem> treeItems = TreeItem.GetTreeFromObject<Category>(
            categories, c => c.Id, c => c.Name, c => c.ChildCategories);

    I could replace the expressions with string values and just use reflection, but I'm trying to avoid this as I want to keep it strongly typed. My reason for doing this is that I have a control that accepts a list of type TreeItem, whereas I have dozens of different types that are all in a tree-like structure, and I don't want to write separate conversion methods for each type (trying to adhere to the DRY principle). Am I going about this the right way? Is there a better way of doing this perhaps?
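    Since nothing here inspects the expression trees themselves, one hedged sketch is to compile them into delegates and walk the source objects recursively (this assumes TreeItem.Children can be populated via Add, a detail the question doesn't show):

        public static IList<TreeItem> GetTreeFromObject<T>(
            IList<T> items,
            Expression<Func<T, string>> id,
            Expression<Func<T, string>> text,
            Expression<Func<T, IList<T>>> childProperty) where T : class
        {
            // Compile once per call, not once per item
            Func<T, string> idFunc = id.Compile();
            Func<T, string> textFunc = text.Compile();
            Func<T, IList<T>> childFunc = childProperty.Compile();

            var result = new List<TreeItem>();
            foreach (T item in items)
            {
                var node = new TreeItem { Id = idFunc(item), Text = textFunc(item) };
                IList<T> children = childFunc(item);
                if (children != null)
                    foreach (TreeItem child in GetTreeFromObject(children, id, text, childProperty))
                        node.Children.Add(child);   // assumption: Children is mutable
                result.Add(node);
            }
            return result;
        }

    If the expressions are only ever simple property accessors, plain Func<T, string> parameters would work just as well and skip the Compile() step entirely.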


  • ASP Classic page quit working

    - by justSteve
    I've had a set of legacy pages running on my IIS7 server for at least a year. Sometime last week something changed, and now this line:

        Response.Write CStr(myRS(0).name) & "=" & CStr(myRS(0).value)

    which used to return nothing more exciting than the string 'Updated=true' (the sproc processes input params, stores them to a table, checks for errors, and when that's all done returns a success code by executing the statement: select 'true' as [Updated]) now invokes my page-side error handler, which offers:

        myError=Error from /logQuizScore.asp
        Error source: Microsoft VBScript runtime error
        Error number: 13
        Error description: Type mismatch

    Important to note that lots of pages use the same framework - same db, same coding format, connection strings - and (so far as I can tell) all others are working. Troubleshot to this point:

      - The call to the stored procedure is working correctly (stuff is stored to the given table).
      - The output from the stored procedure is working correctly (I can execute a direct call with the given parameters and stuff works; I can see Profiler calling and passing).
      - I can replace all the code with 'select 'true' as updated' and the error is the same.
      - Everything up to the Response.Write statement above is correct.

    So something changed how ADO renders that particular recordset. So I try:

        Response.Write myRS.Item.count

    and get:

        Error number: 424
        Error description: Object required

    The recordset object seems not to be instantiating, but the command object did execute. Repeat - lots of other pages use just the same basic logic to hit other sprocs without a problem. Full code snippet:

        set cmd1 = Server.CreateObject("ADODB.Command")
        cmd1.ActiveConnection = MM_cnCompliance4_STRING
        cmd1.CommandText = "dbo._usp_UserAnswers_INSERT"
        ...
        cmd1.CommandType = 4
        cmd1.CommandTimeout = 0
        cmd1.Prepared = true
        set myRS = cmd1.Execute
        Response.Write CStr(myRS(0).name) & "=" & CStr(myRS(0).value)
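    A hedged guess worth testing: when a sproc lacks SET NOCOUNT ON, the "N rows affected" messages can surface in ADO as extra, closed recordsets ahead of the real one, which matches "the command executed but the recordset isn't there". Two sketches (the procedure name is taken from the question; the rest is illustrative):

        -- In the stored procedure: suppress the row-count messages
        ALTER PROCEDURE dbo._usp_UserAnswers_INSERT ... AS
        BEGIN
            SET NOCOUNT ON;
            ...
            SELECT 'true' AS [Updated];
        END

        ' Or, on the ASP side, skip past any closed/empty results:
        Set myRS = cmd1.Execute
        Do While Not (myRS Is Nothing)
            If myRS.State = 1 Then Exit Do   ' 1 = adStateOpen: a real resultset
            Set myRS = myRS.NextRecordset
        Loop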


  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed. I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give!

        function Invoke-SqlRestore {
            param(
                [string]$backup_file_name,
                [string]$server_name,
                [string]$database_name,
                [switch]$norecovery=$false
            )
            # Get a new connection to the server
            [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
            Write-Host "Starting restore to $database_name on $server_name."
            Try {
                $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")
                # Get local paths to the Database and Log file locations
                If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
                Else { $database_path = $server.Settings.DefaultFile }
                If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
                Else { $database_log_path = $server.Settings.DefaultLog }
                # Load up the Restore object settings
                $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
                $restore.Action = 'Database'
                $restore.Database = $database_name
                $restore.ReplaceDatabase = $true
                if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
                Else { $restore.Norecovery = $false }
                $restore.Devices.Add($backup_device)
                # Get information from the backup file
                $restore_details = $restore.ReadBackupHeader($server)
                $data_files = $restore.ReadFileList($server)
                # Restore all backup files
                ForEach ($data_row in $data_files) {
                    $logical_name = $data_row.LogicalName
                    $physical_name = Get-FileName -path $data_row.PhysicalName
                    $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
                    $restore_data.LogicalFileName = $logical_name
                    if ($data_row.Type -eq "D") {
                        # Restore Data file
                        $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
                    } Else {
                        # Restore Log file
                        $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
                    }
                    [Void]$restore.RelocateFiles.Add($restore_data)
                }
                $restore.SqlRestore($server)
                # If there are two files, assume the next is a Log
                if ($restore_details.Rows.Count -gt 1) {
                    $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
                    $restore.FileNumber = 2
                    $restore.SqlRestore($server)
                }
            }
            Catch {
                $ex = $_.Exception
                Write-Output $ex.message
                $ex = $ex.InnerException
                while ($ex.InnerException) {
                    Write-Output $ex.InnerException.message
                    $ex = $ex.InnerException
                }
                Throw $ex
            }
            Finally {
                $server.ConnectionContext.Disconnect()
            }
            Write-Host "Restore ended without any errors."
        }
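    One loudly hedged thing to try: if anything still holds a connection to the existing database, the replace may not behave as expected, so force exclusive access before restoring (KillAllProcesses is a real SMO method; whether it cures this particular symptom is a guess):

        # Before $restore.SqlRestore($server), drop other sessions on the target
        if ($server.Databases[$database_name]) {
            $server.KillAllProcesses($database_name)
        }

    It is also worth double-checking that $server points at the same instance you inspect afterwards - restoring against a different instance would show exactly these symptoms.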


  • jQuery document.ready + ASP.NET ContentPlaceHolder causes Visual Studio IntelliSense problems

    - by Konstantin
    Hi! I want to execute JavaScript when the document is ready, without much syntax overhead. The idea is to use Site.Master and a ContentPlaceHolder:

        <script type="text/javascript">
            $(document).ready(function () {
                <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />
            });
        </script>

    and in inherited pages just write plain code:

        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            $("#Login").focus();
        </asp:Content>

    It works fine, but Visual Studio complains and gives warnings. The warning in the master page is "Expected expression" at the line <asp:ContentPlaceHolder. In inherited pages the warning is "Could not find 'OnReadyScript' in the current master page or pages". I tried using Writer.Write in the master page to render the script tag and wrap the code:

        <% Writer.Write(@"<script type=""text/javascript"">$(document).ready(function () {"); %>
        <asp:ContentPlaceHolder ID="OnReadyScrit" runat="server" />
        <% Writer.Write(@"});"); %>

    but page rendering terminates after the opening script tag is rendered. The HTML basically ends with <script type="text/javascript">. How can I make it work?
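    One hedged way to keep both the page parser and IntelliSense happy is to give up on wrapping the placeholder inside a shared script element and let each page emit its own complete block (the focus call is carried over from the question):

        <%-- Site.Master: a bare placeholder, outside any <script> element --%>
        <asp:ContentPlaceHolder ID="OnReadyScript" runat="server" />

        <%-- Content page: a complete, self-contained script block --%>
        <asp:Content ID="Content3" ContentPlaceHolderID="OnReadyScript" runat="server">
            <script type="text/javascript">
                $(document).ready(function () {
                    $("#Login").focus();
                });
            </script>
        </asp:Content>

    This costs a few lines per page, but the markup in every file now parses on its own, which is what the warnings are really complaining about.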


  • How to find the cause of a SocketException with the message that an established connection was aborted

    - by cdpnet
    Hi all, I know similar questions may have been asked many times, but I want to describe the behavior I'm seeing and find out if somebody can help predict the cause of it. I am writing a Windows service which connects to another Windows service over TCP. There are 100 user entities, with 5 connections each. These users perform their tasks using their individual connections. The application runs without seeing this problem for 1 or 2 days, or sometimes shows the problem right after starting (rarely). The best run I had was 4 to 5 days without this exception, and after that the application died or I had to stop it for various reasons. I want to know what can be causing this. Here is the stack trace:

        System.IO.IOException: Unable to write data to the transport connection:
        An established connection was aborted by the software in your host machine.
        ---> System.Net.Sockets.SocketException: An established connection was aborted
             by the software in your host machine
           at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           --- End of inner exception stack trace ---
           at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
           at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
           at System.Net.Security.SslStream.Write(Byte[] buffer, Int32 offset, Int32 count)
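    That message is the text of WSAECONNABORTED (10053): the local TCP stack, or something hooked into it such as a firewall or antivirus, tore the connection down - often after long idleness. A hedged mitigation sketch, assuming each connection wraps a TcpClient named client:

        // Ask the OS to send TCP keep-alive probes so long-idle connections
        // are less likely to be dropped by intermediaries.
        client.Client.SetSocketOption(
            SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

    Application-level heartbeats every minute or two serve the same purpose and also exercise the SSL layer end to end.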


  • PLT Scheme Memory

    - by Eric
    So I need some help with implementing a make-memory program using Scheme. I need two messages, 'write and 'read. So it would be like (mymem 'write 34 -116) and (mymem 'read 99), right? And (define mymem (make-memory 100)). How would I implement this in Scheme - using an alist? I need some help coding it. I have this code, which makes make-memory a procedure; when you run mymem you get ((99 . 0)), and what I need to do is recur so I get an alist of dotted pairs down to ((0 . 0)). Any suggestions on how to code this? Does anyone have ideas on how to recur and how to handle the 'write and 'read messages?

        (define make-memory
          (lambda (n)
            (letrec ((mem '())
                     (dump (display mem)))
              (lambda ()
                (if (= n 0)
                    (cons (cons n 0) mem)
                    mem)
                (cons (cons (- n 1) 0) mem))
              (lambda (msg loc val)
                (cond ((equal? msg 'read)
                       (display (cons n val))
                       (set! n (- n 1)))
                      ((equal? msg 'write)
                       (set! mem (cons val loc))
                       (set! n (- n 1))
                       (display mem)))))))

        (define mymem (make-memory 100))

    Yes, this is an assignment, but I wrote this code. I just need some help or direction. And yes, I do know about variable-length argument lists.
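    A hedged sketch of one way to structure it: build the alist up front, close over it, and dispatch on the message (build-list is PLT's list constructor, assv is standard; the val argument is taken as a rest parameter so 'read can omit it):

        (define (make-memory n)
          ;; mem is an alist ((0 . 0) (1 . 0) ... (n-1 . 0))
          (let ((mem (build-list n (lambda (i) (cons i 0)))))
            (lambda (msg loc . val)
              (cond ((eq? msg 'read)
                     ;; return the stored value at this address
                     (cdr (assv loc mem)))
                    ((eq? msg 'write)
                     ;; replace the pair for this address, keep the rest
                     (set! mem (map (lambda (cell)
                                      (if (= (car cell) loc)
                                          (cons loc (car val))
                                          cell))
                                    mem)))
                    (else (error "unknown message" msg))))))

        (define mymem (make-memory 100))
        (mymem 'write 34 -116)
        (mymem 'read 34)   ; => -116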


  • Understanding WordProcessingML tags and avoid unnecessary tags

    - by rithanyalaxmi
    Hi, I am using the MS Word API to generate .docx files containing data fetched from a DB, applying the appropriate styles, fonts, symbols, etc. If the data fetched from the DB is quite large, there is a problem displaying it in the .docx file. I found that MS Word 2007 internally writes some content through tags that may not be needed to display the data. Hence I am trying to figure out which MS Word tags are actually necessary when converting to an .xml file, so that I can avoid unnecessary tags and emit only those needed to display the data. I am therefore planning to write my own .xml with only the MS Word tags that are needed, rather than generating the .xml from a .docx file. My queries are:

    1) Is it right that MS Word generates some tags that may not be needed during the conversion of .docx to document.xml, and that this makes it heavy? If so, what are those tags, so that I can avoid them when writing my own .xml file?

    2) Please share links explaining the MS Word tags and their purposes - which tags are needed and which are not?

    3) Is my approach of writing a new .xml similar to document.xml (the .docx conversion) a worthwhile way forward, so that I can build the .xml with only the tags I need and improve the performance of the data display?

    Please shed some light on this, and thanks in advance. Thanks, Rithu
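    For orientation, a minimal, hedged WordprocessingML skeleton - roughly the least a document.xml needs to render one paragraph of text (the namespace URI is the standard OOXML one):

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <w:document xmlns:w="http://schemas.openxmlformats.org/wordprocessingml/2006/main">
          <w:body>
            <w:p>                           <!-- paragraph -->
              <w:r>                         <!-- run: a span of uniformly formatted text -->
                <w:t>Hello, world</w:t>     <!-- literal text -->
              </w:r>
            </w:p>
          </w:body>
        </w:document>

    Everything beyond w:document/w:body/w:p/w:r/w:t (rsid attributes, w:proofErr markers, empty w:rPr blocks, and so on) is bookkeeping that Word emits but a renderer does not strictly need.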


  • mnesia primary key

    - by maryjanne
    Hi, I have two tables, notes and tag, and I want to use the id from notes as a primary key in the tag table, but I don't know where I'm going wrong. My notes id is generated from another table, counter, with the function dirty_update_counter. My function for the id_note in tag looks like this:

        Fun = fun() ->
                  mnesia:write(#tag{ id_note = 0 })
              end,
        mnesia:transaction(Fun).

        generate_Oid(TableName) when is_atom(TableName) ->
            F = fun() ->
                    [Oid] = mnesia:read(tag, TableName, write),
                    NewId = Oid#tag.id_note + 1,
                    New = Oid#tag{ id_note = NewId },
                    mnesia:write(New),
                    NewId
                end,
            mnesia:transaction(F).

        insert_n(N) when is_record(N, note) ->
            F = fun() ->
                    {atomic, Id} = generate_Oid(note),
                    New = N#note{ id = Id },
                    mnesia:write(New),
                    New
                end,
            mnesia:transaction(F).

        find_n(Id) when is_integer(Id) ->
            {atomic, [N]} = mnesia:transaction(fun() -> mnesia:read({note, Id}) end),
            N.

    But this function doesn't increment the id_note field in the tag table, even though the id field in my note table is incremented from the counter table. Thanks in advance for any help.
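    A hedged sketch of a simpler route: since the note ids already come from dirty_update_counter, the tag row can be written with that same id inside the insert transaction (the counter table layout and record names are assumed from the question):

        %% counter rows are assumed to look like {counter, Name, Value}
        next_id(Name) ->
            mnesia:dirty_update_counter(counter, Name, 1).

        insert_n(N) when is_record(N, note) ->
            Id = next_id(note),
            mnesia:transaction(fun() ->
                mnesia:write(N#note{ id = Id }),
                mnesia:write(#tag{ id_note = Id })   % tag shares the note's id
            end).

    This also avoids calling mnesia:transaction from inside another transaction, which is what generate_Oid/1 ends up doing when invoked from insert_n/1.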


  • Java / Groovy Socket - Detecting the socket being closed in a non-blocking way

    - by John Arrowwood
    I'm trying to create a small HTTP proxy that can re-write the request/headers as needed to suit my requirements. If one already exists, please, point me to it. Otherwise... I've written something that ALMOST works. It can do the proxy function, but not the re-write (yet). The problem is, I can't detect when the remote socket has been closed down without doing a blocking read. It is CRITICAL to the functionality of this thing that it be able to detect the socket being closed without blocking. I have SCOURED the Java API documentation, and I can't find ANY indication that it is even possible. Here's what I have:

        while ( this.inbound.isConnected() && this.outbound.isConnected() ) {
            try {
                while ( ( available = readFromClient.available() ) != 0 ) {
                    if ( available > 1024 )
                        available = 1024
                    bytesRead = readFromClient.read( buffer, 0, available )
                    writeToServer.write( buffer, 0, bytesRead )
                }
                while ( ( available = readFromServer.available() ) != 0 ) {
                    if ( available > 1024 )
                        available = 1024
                    bytesRead = readFromServer.read( buffer, 0, available )
                    writeToClient.write( buffer, 0, bytesRead )
                }
            } catch (e) {
                print e
            }
            println "Connected: " + this.inbound.isConnected()
            println "Bound: " + this.inbound.isBound()
            println "InputShutdown: " + this.inbound.isInputShutdown()
            println "OutputShutdown: " + this.inbound.isOutputShutdown()
            print "\n";
            Thread.sleep( 10 )
        }

    The tests for the socket being closed never indicate that the socket was closed. And, as I mentioned, I can't find ANY examples of how to detect the 'END OF FILE' condition on the stream without doing a blocking read. There HAS to be a way. Does anyone here know what it is?
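    The stream can only report EOF from read() returning -1; available() returning 0 means "nothing buffered right now", which is indistinguishable from a healthy idle connection. A hedged sketch of the usual workaround - make the blocking read cheap with a short SO_TIMEOUT:

        inbound.soTimeout = 10   // Groovy shorthand for setSoTimeout(10 ms)
        try {
            int b = readFromClient.read()      // blocks for at most ~10 ms
            if (b == -1) {
                // peer closed its end: tear down this proxy pair
            } else {
                writeToServer.write(b)         // a real byte; keep relaying
            }
        } catch (SocketTimeoutException e) {
            // no data right now, but the socket is still open
        }

    The isConnected()/isInputShutdown() family only reflects what this side has done to the socket, which is why those log lines never change.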


  • What is the recommended coding style for PowerShell?

    - by stej
    Is there any recommended coding style for writing PowerShell scripts? It's not about how to structure the code (how many functions, whether to use a module, ...). It's about 'how to write the code so that it is readable'. In programming languages there are recommended coding styles (what to indent, how to indent - spaces/tabs, where to put a new line, where to put braces, ...), but I haven't seen any suggestion for PowerShell. What I'm particularly interested in:

    How to write parameters:

        function New-XYZItem
          ( [string] $ItemName
          , [scriptblock] $definition
          ) { ...

    (I see that it's more like 'V1' syntax) or

        function New-PSClass {
          param([string] $ClassName
               ,[scriptblock] $definition
          )...

    or (why add an empty attribute?)

        function New-PSClass {
          param([Parameter()][string] $ClassName
               ,[Parameter()][scriptblock] $definition
          )...

    or (other formatting I saw, maybe in Jaykul's code):

        function New-PSClass {
          param(
            [Parameter()]
            [string] $ClassName
          ,
            [Parameter()]
            [scriptblock] $definition
          )...

    or ..?

    How to write a complex pipeline:

        Get-SomeData -param1 abc -param2 xyz | % {
            $temp1 = $_
            1..100 | % {
                Process-somehow $temp1 $_
            }
        } | % {
            Process-Again $_
        } | Sort-Object -desc

    or (name of cmdlet on a new line):

        Get-SomeData -param1 abc -param2 xyz |
            % {
                $temp1 = $_
                1..100 | % { Process-somehow $temp1 $_ }
            } |
            % { Process-Again $_ } |
            Sort-Object -desc

    And what if there are -begin, -process, and -end params? How to make it the most readable?

        Get-SomeData -param1 abc -param2 xyz |
            % -begin { init } -process { Process-somehow2 ... } -end { Process-somehow3 ... } |
            % -begin { } ....

    or

        Get-SomeData -param1 abc -param2 xyz |
            % `
              -begin { init } `
              -process { Process-somehow2 ... } `
              -end { Process-somehow3 ... } |
            % -begin { } ....

    The indentation is important here, and so is which element goes on a new line. I have covered only the questions that come to my mind most frequently. There are others, but I'd like to keep this SO question 'short'. Any other suggestions are welcome.


  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server.

    There's the backup server.. A Dell R715 with 2x 16-core AMD 62xx CPUs and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards and an Intel X520-SR dual-port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, in so far as the controller was working fine. The Dell server has Ubuntu 12.04 Server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt.

    When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr); the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads) - like 1.5-2.5TB/hr write - but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault seems only ever to give me 200-250GB/hr throughput, and out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-#

    Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

      - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
      - Setting Prefer Mounted Volumes = no in the Job Definition
      - Setting multiple devices in the Autochanger resource

    Documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution..
    Configuration Files:

    bacula-dir.conf:

        # Default Bacula Director Configuration file
        Director {                            # define myself
          Name = backuphost-1-dir
          DIRport = 9101                      # where we listen for UA connections
          QueryFile = "/etc/bacula/scripts/query.sql"
          WorkingDirectory = "/var/lib/bacula"
          PidDirectory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          Password = "yourekiddingright"      # Console password
          Messages = Daemon
          DirAddress = 0.0.0.0
          #DirAddress = 127.0.0.1
        }

        JobDefs {
          Name = "DefaultFileJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = File
          Messages = Standard
          Pool = File
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
        }

        JobDefs {
          Name = "DefaultTapeJob"
          Type = Backup
          Level = Incremental
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Schedule = "WeeklyCycle"
          Storage = "SpectraLogic"
          Messages = Standard
          Pool = AllTapes
          Priority = 10
          Write Bootstrap = "/var/lib/bacula/%c.bsr"
          Prefer Mounted Volumes = no
        }

        # Define the main nightly save backup job
        # By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
        Job {
          Name = "BackupClient1"
          JobDefs = "DefaultFileJob"
        }

        Job {
          Name = "BackupThisVolume"
          JobDefs = "DefaultTapeJob"
          FileSet = "SpecialVolume"
        }

        #Job {
        #  Name = "BackupClient2"
        #  Client = backuphost-12-fd
        #  JobDefs = "DefaultJob"
        #}

        # Backup the catalog database (after the nightly save)
        Job {
          Name = "BackupCatalog"
          JobDefs = "DefaultFileJob"
          Level = Full
          FileSet = "Catalog"
          Schedule = "WeeklyCycleAfterBackup"
          # This creates an ASCII copy of the catalog
          # Arguments to make_catalog_backup.pl are:
          #   make_catalog_backup.pl <catalog-name>
          RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
          # This deletes the copy of the catalog
          RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
          Write Bootstrap = "/var/lib/bacula/%n.bsr"
          Priority = 11                       # run after main backup
        }

        # Standard Restore template, to be changed by Console program
        # Only one such job is needed for all Jobs/Clients/Storage ...
        Job {
          Name = "RestoreFiles"
          Type = Restore
          Client = backuphost-1-fd
          FileSet = "Full Set"
          Storage = File
          Pool = Default
          Messages = Standard
          Where = /srv/bacula/restore
        }

        FileSet {
          Name = "SpecialVolume"
          Include {
            Options { signature = MD5 }
            File = /mnt/SpecialVolume
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        # List of files to be backed up
        FileSet {
          Name = "Full Set"
          Include {
            Options { signature = MD5 }
            File = /usr/sbin
          }
          Exclude {
            File = /var/lib/bacula
            File = /nonexistant/path/to/file/archive/dir
            File = /proc
            File = /tmp
            File = /.journal
            File = /.fsck
          }
        }

        Schedule {
          Name = "WeeklyCycle"
          Run = Full 1st sun at 23:05
          Run = Differential 2nd-5th sun at 23:05
          Run = Incremental mon-sat at 23:05
        }

        # This schedule does the catalog. It starts after the WeeklyCycle
        Schedule {
          Name = "WeeklyCycleAfterBackup"
          Run = Full sun-sat at 23:10
        }

        # This is the backup of the catalog
        FileSet {
          Name = "Catalog"
          Include {
            Options { signature = MD5 }
            File = "/var/lib/bacula/bacula.sql"
          }
        }

        # Client (File Services) to backup
        Client {
          Name = backuphost-1-fd
          Address = localhost
          FDPort = 9102
          Catalog = MyCatalog
          Password = "surelyyourejoking"      # password for FileDaemon
          File Retention = 30 days
          Job Retention = 6 months
          AutoPrune = yes                     # Prune expired Jobs/Files
        }

        # Second Client (File Services) to backup
        # You should change Name, Address, and Password before using
        #Client {
        #  Name = backuphost-12-fd
        #  Address = localhost2
        #  FDPort = 9102
        #  Catalog = MyCatalog
        #  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
        #  File Retention = 30 days
        #  Job Retention = 6 months
        #  AutoPrune = yes
        #}

        # Definition of file storage device
        Storage {
          Name = File
          # Do not use "localhost" here
          Address = localhost                 # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "lalalalala"
          Device = FileStorage
          Media Type = File
        }

        Storage {
          Name = "SpectraLogic"
          Address = localhost
          SDPort = 9103
          Password = "linkedinmakethebestpasswords"
          Device = Drive-1
          Device = Drive-2
          Media Type = LTO5
          Autochanger = yes
        }

        # Generic catalog service
        Catalog {
          Name = MyCatalog
          # Uncomment the following line if you want the dbi driver
          # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
          dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
        }

        # Reasonable message delivery -- send most everything to email address
        # and to the console
        Messages {
          Name = Standard
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
          operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
          mail = root@localhost = all, !skipped
          operator = root@localhost = mount
          console = all, !skipped, !saved
          # WARNING! the following will create a file that you must cycle from
          # time to time as it will grow indefinitely. However, it will
          # also keep all your messages if they scroll off the console.
          append = "/var/lib/bacula/log" = all, !skipped
          catalog = all
        }

        # Message delivery for daemon messages (no job).
        Messages {
          Name = Daemon
          mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
          mail = root@localhost = all, !skipped
          console = all, !skipped, !saved
          append = "/var/lib/bacula/log" = all, !skipped
        }

        # Default pool definition
        Pool {
          Name = Default
          Pool Type = Backup
          Recycle = yes                       # Bacula can automatically recycle Volumes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 365 days         # one year
        }

        # File Pool definition
        Pool {
          Name = File
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes
          Volume Retention = 365 days         # one year
          Maximum Volume Bytes = 50G          # Limit Volume size to something reasonable
          Maximum Volumes = 100               # Limit number of Volumes in Pool
        }

        Pool {
          Name = AllTapes
          Pool Type = Backup
          Recycle = yes
          AutoPrune = yes                     # Prune expired volumes
          Volume Retention = 31 days          # one month
        }

        # Scratch pool definition
        Pool {
          Name = Scratch
          Pool Type = Backup
        }

        # Restricted console used by tray-monitor to get the status of the director
        Console {
          Name = backuphost-1-mon
          Password = "LastFMalsostorePasswordsLikeThis"
          CommandACL = status, .status
        }

    bacula-sd.conf:

        # Default Bacula Storage Daemon Configuration file
        Storage {                             # definition of myself
          Name = backuphost-1-sd
          SDPort = 9103                       # Director's port
          WorkingDirectory = "/var/lib/bacula"
          Pid Directory = "/var/run/bacula"
          Maximum Concurrent Jobs = 20
          SDAddress = 0.0.0.0
          # SDAddress = 127.0.0.1
        }

        # List Directors who are permitted to contact Storage daemon
        Director {
          Name = backuphost-1-dir
          Password = "passwordslinplaintext"
        }

        # Restricted Director, used by tray-monitor to get the
        # status of the storage daemon
        Director {
          Name = backuphost-1-mon
          Password = "totalinsecurityabound"
          Monitor = yes
        }

        Device {
          Name = FileStorage
          Media Type = File
          Archive Device = /srv/bacula/archive
          LabelMedia = yes;                   # lets Bacula label unlabeled media
          Random Access = Yes;
          AutomaticMount = yes;               # when device opened, read it
          RemovableMedia = no;
          AlwaysOpen = no;
        }

        Autochanger {
          Name = SpectraLogic
          Device = Drive-1
          Device = Drive-2
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg4
        }

        Device {
          Name = Drive-1
          Drive Index = 0
          Archive Device = /dev/nst0
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        Device {
          Name = Drive-2
          Drive Index = 1
          Archive Device = /dev/nst1
          Changer Device = /dev/sg4
          Media Type = LTO5
          AutoChanger = yes
          RemovableMedia = yes;
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RandomAccess = no;
          LabelMedia = yes
        }

        # Send all messages to the Director,
        # mount messages also are sent to the email address
        Messages {
          Name = Standard
          director = backuphost-1-dir = all
        }

    bacula-fd.conf:

        # Default Bacula File Daemon Configuration file

        # List Directors who are permitted to contact this File daemon
        Director {
          Name = backuphost-1-dir
          Password = "hahahahahaha"
        }

        # Restricted Director, used by tray-monitor to get the
        # status of the file daemon
        Director {
          Name = backuphost-1-mon
          Password = "hohohohohho"
          Monitor = yes
        }

        # "Global" File daemon configuration specifications
        FileDaemon {                          # this is me
          Name = backuphost-1-fd
          FDport = 9102                       # where we listen for the director
          WorkingDirectory = /var/lib/bacula
          Pid Directory = /var/run/bacula
          Maximum Concurrent Jobs = 20
          #FDAddress = 127.0.0.1
          FDAddress = 0.0.0.0
        }

        # Send all messages except skipped files back to Director
        Messages {
          Name = Standard
          director = backuphost-1-dir = all, !skipped, !restored
        }
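    One knob not in the "Things I've tried" list above, offered as a loudly hedged guess rather than a known fix: Bacula's manual discusses limiting each tape Device to a single job, so that additional concurrent jobs spill onto the next drive instead of queuing behind the first. In bacula-sd.conf that would look like:

        Device {
          Name = Drive-1
          ...
          Maximum Concurrent Jobs = 1   # one job per physical drive
        }

        Device {
          Name = Drive-2
          ...
          Maximum Concurrent Jobs = 1
        }

    Two jobs started together against the SpectraLogic storage would then be forced onto separate drives - assuming the director- and daemon-level Maximum Concurrent Jobs settings already allow them to run at once, which they do here.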


  • SWI-Prolog as an embedded application in VS2008 C++

    - by H.J. Miri
    I have a C++ application (prog.sln) which is my main program, and I want to use Prolog as an embedded logic server (hanoi.pl):

        int main(int argc, char** argv)
        {
            putenv( "SWI_HOME_DIR=C:\\Program Files\\pl" );
            argv[0] = "libpl.dll";
            argv[1] = "-x";
            argv[2] = "hanoi.pl";
            argv[3] = NULL;
            PL_initialise(argc, argv);
            {
                fid_t fid = PL_open_foreign_frame();
                predicate_t pred = PL_predicate( "hanoi", 1, "user" );
                int rval = PL_call_predicate( NULL, PL_Q_NORMAL, pred, 3 );
                qid_t qid = PL_open_query( NULL, PL_Q_NORMAL, pred, 3 );
                while ( PL_next_solution(qid) )
                    return qid;
                PL_close_query(qid);
                PL_close_foreign_frame(fid);
                system("PAUSE");
            }
            return 1;
            system("PAUSE");
        }

    Here is my Prolog source code:

        :- use_module( library(shlib) ).

        hanoi( N ):-
            move( N, left, center, right ).

        move( 0, _, _, _ ):- !.
        move( N, A, B, C ):-
            M is N-1,
            move( M, A, C, B ),
            inform( A, B ),
            move( M, C, B, A ).

        inform( X, Y ):-
            write( 'move a disk from ' ),
            write( X ),
            write( ' to ' ),
            write( Y ),
            nl.

    At the command line I type:

        plld -o hanoi prog.sln hanoi.pl

    but it doesn't compile. When I run my C++ app, it says: Undefined procedure: hanoi/1. What is missing or wrong in my code or at the prompt that prevents my C++ app from consulting Prolog? I appreciate any help/pointers.
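    The "Undefined procedure: hanoi/1" message means the embedded engine started but hanoi.pl was never consulted: -x expects a saved state, not a source file. A hedged sketch of a minimal embedding that loads the source and builds the goal's argument properly (the last argument of PL_call_predicate is a term reference, not the integer 3; paths are illustrative):

        #include <SWI-Prolog.h>

        int main(void)
        {
            /* -s loads a source file at startup */
            char *av[] = { "prog", "-s", "hanoi.pl", NULL };
            if ( !PL_initialise(3, av) )
                PL_halt(1);

            predicate_t pred = PL_predicate("hanoi", 1, "user");
            term_t arg = PL_new_term_refs(1);
            PL_put_integer(arg, 3);                     /* call hanoi(3) */
            int ok = PL_call_predicate(NULL, PL_Q_NORMAL, pred, arg);

            PL_halt(ok ? 0 : 1);
            return 0;
        }

    Note that plld links C/C++ source files and .pl files into one executable; feeding it a Visual Studio solution file (prog.sln) will not work - it needs the actual sources.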


  • Not able to use 7-Zip to compress from stdin and output to stdout?

    - by acidzombie24
    I get the error "Not implemented". I want to compress a file using 7-Zip via stdin, then take the data via stdout and do more conversions in my application. In the man page it shows this example:

        % echo foo | 7z a dummy -tgzip -si -so /dev/null

    I am using Windows and C#. Results:

        7-Zip 4.65  Copyright (c) 1999-2009 Igor Pavlov  2009-02-03
        Creating archive StdOut

        System error:
        Not implemented

    Code:

        public static byte[] a7zipBuf(byte[] b)
        {
            string line;
            var p = new Process();
            line = string.Format("a dummy -t7z -si -so ");
            p.StartInfo.Arguments = line;
            p.StartInfo.FileName = @"C:\Program Files\7-Zip\7z.exe";
            p.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
            p.StartInfo.CreateNoWindow = true;
            p.StartInfo.UseShellExecute = false;
            p.StartInfo.RedirectStandardOutput = true;
            p.StartInfo.RedirectStandardError = true;
            p.StartInfo.RedirectStandardInput = true;
            p.Start();
            p.StandardInput.BaseStream.Write(b, 0, b.Length);
            p.StandardInput.Close();
            Console.Write(p.StandardError.ReadToEnd());
            //Console.Write(p.StandardOutput.ReadToEnd());
            return p.StandardOutput.BaseStream.ReadFully();
        }

    Is there another simple way to read the file into memory? Right now I can 1) write to a temporary file and read it (easy, and I can copy/paste some code), 2) use a file pipe (medium? I have never done it), or 3) something else.
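    Worth noticing: the man-page example uses -tgzip, while the failing code passes -t7z. The .7z container has to seek back and rewrite its header when it finishes, so it cannot stream to stdout - a plausible source of the "Not implemented" error. A hedged one-line change, if gzip is acceptable for the intermediate data:

        // gzip is a pure stream format, so -si/-so both work
        line = string.Format("a dummy -tgzip -si -so ");

    Separately, calling ReadToEnd on stderr before draining stdout can deadlock once the output outgrows the pipe buffer; reading stdout first (or on another thread) is the usual pattern.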


  • How to replace openSSL calls with C# code?

    - by fonix232
    Hey there again! Today I ran into a problem when I was making a new theme creator for Chrome. As you may know, Chrome uses a "new" file format, called CRX, to manage its plugins and themes. It is a basic zip file, but a bit modified:

        "Cr24" + derkey + signature + zipFile

    And here comes the problem. There are only two CRX creators, written in Ruby and Python. I don't know either language very well (I had some basic experience in Python, though mostly with PyS60), so I would like to ask you to help me convert this Python app to C# code that doesn't depend on external programs. Here is the source of crxmake.py:

        #!/usr/bin/python

        # Cribbed from http://github.com/Constellation/crxmake/blob/master/lib/crxmake.rb
        # and http://src.chromium.org/viewvc/chrome/trunk/src/chrome/tools/extensions/chromium_extension.py?revision=14872&content-type=text/plain&pathrev=14872
        # from: http://grack.com/blog/2009/11/09/packing-chrome-extensions-in-python/

        import sys
        from array import *
        from subprocess import *
        import os
        import tempfile

        def main(argv):
            arg0,dir,key,output = argv
            # zip up the directory
            input = dir + ".zip"
            if not os.path.exists(input):
                os.system("cd %(dir)s; zip -r ../%(input)s . -x '.svn/*'" % locals())
            else:
                print "'%s' already exists using it" % input
            # Sign the zip file with the private key in PEM format
            signature = Popen(["openssl", "sha1", "-sign", key, input], stdout=PIPE).stdout.read();
            # Convert the PEM key to DER (and extract the public form) for inclusion in the CRX header
            derkey = Popen(["openssl", "rsa", "-pubout", "-inform", "PEM", "-outform", "DER", "-in", key], stdout=PIPE).stdout.read();
            out=open(output, "wb");
            out.write("Cr24")  # Extension file magic number
            header = array("l");
            header.append(2);  # Version 2
            header.append(len(derkey));
            header.append(len(signature));
            header.tofile(out);
            out.write(derkey)
            out.write(signature)
            out.write(open(input).read())
            os.unlink(input)
            print "Done."

        if __name__ == '__main__':
            main(sys.argv)

    Please could you help me?
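    A hedged partial sketch of the C# side: the header layout is easy to reproduce with BinaryWriter (little-endian int32s, matching array("l") on x86), and RSACryptoServiceProvider can produce the SHA-1 signature. The genuinely awkward piece in .NET 3.5 is turning a PEM private key into the DER-encoded public key; that step is left as an assumption here (a PEM/ASN.1 helper or BouncyCastle would cover it):

        // Assumes: zipBytes = the zipped extension dir, rsa = an initialized
        // RSACryptoServiceProvider holding the private key, and derKey = the
        // DER-encoded public key obtained elsewhere (the hard part).
        byte[] signature = rsa.SignData(zipBytes, "SHA1");

        using (var bw = new BinaryWriter(File.Open(output, FileMode.Create)))
        {
            bw.Write(Encoding.ASCII.GetBytes("Cr24")); // magic number
            bw.Write(2);                               // CRX version, little-endian int32
            bw.Write(derKey.Length);
            bw.Write(signature.Length);
            bw.Write(derKey);
            bw.Write(signature);
            bw.Write(zipBytes);
        }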


  • Texture2D problem

    - by Anders Karlsson
    I have a problem that is driving me crazy: I want to write a number of texts on the screen using Texture2D; however, I only seem to be able to write the first one. If I write one of the labels individually it works, but not if I write all of them - only the first label is displayed. Let me show some code:

        -(void)drawText:(NSString*)theString AtX:(float)X Y:(float)Y withFont:(UIFont*)aFont
        {
            // set color
            glColor4f(1, 0, 0, 1.0);
            // Enable modes needed for drawing
            glEnableClientState(GL_TEXTURE_COORD_ARRAY);
            Texture2D* textTexture = [[Texture2D alloc] initWithString:theString
                                                            dimensions:viewSize // 320x480
                                                             alignment:UITextAlignmentLeft
                                                                  font:aFont];
            glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
            [textTexture drawInRect:CGRectMake(X,Y,1,1)];
            glDisableClientState(GL_TEXTURE_COORD_ARRAY);
            [textTexture release];
        }

    When I call drawText once, it displays the text properly, but if I call it a second time nothing seems to be displayed. Does anybody have an idea what it could be? The states like GL_BLEND and GL_TEXTURE_2D have been enabled in the view's setup function. The Texture2D dimensions are 512x512, as I pass the whole screen to the function; if I don't pass that, the text gets enlarged and fuzzy. I am a bit uncertain about that parameter. TIA for any help.
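    A hedged debugging aid rather than a fix: checking glGetError right after each draw will say whether the second call is being rejected by GL or merely drawing somewhere unexpected:

        [textTexture drawInRect:CGRectMake(X, Y, 1, 1)];
        GLenum err = glGetError();   // GL_NO_ERROR (0) means the call was accepted
        if (err != GL_NO_ERROR)
            NSLog(@"drawInRect failed: 0x%x", err);

    If no error is reported, the next suspects are state that the first call changed and never restored, and the destination rectangle itself.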


  • Powershell / .Net: Get a reference to an object returned by a method

    - by Dan Menes
    I am teaching myself PowerShell by writing a simple parser. I use the .NET framework class Collections.Stack, and I want to modify the object at the top of the stack in place. I know I can pop() the object off, modify it, and then push() it back on, but that strikes me as inelegant. First, I tried this:

        $stk = new-object Collections.Stack
        $stk.push( (,'My first value') )
        ( $stk.peek() ) += ,'| My second value'

    which threw an error:

        Assignment failed because [System.Collections.Stack] doesn't contain a settable property 'peek()'.
        At C:\Development\StackOverflow\PowerShell-Stacks\test.ps1:3 char:12
        + ( $stk.peek <<<< () ) += ,'| My second value'
            + CategoryInfo          : InvalidOperation: (peek:String) [], RuntimeException
            + FullyQualifiedErrorId : ParameterizedPropertyAssignmentFailed

    Next I tried this:

        $ary = $stk.peek()
        $ary += ,'| My second value'
        write-host "Array is: $ary"
        write-host "Stack top is: $($stk.peek())"

    which prevented the error but still didn't do the right thing:

        Array is: My first value | My second value
        Stack top is: My first value

    Clearly, what is getting assigned to $ary is a copy of the object at the top of the stack, so when I modify the object in $ary, the object at the top of the stack remains unchanged. Finally, I read up on the [ref] type and tried this:

        $ary_ref = [ref]$stk.peek()
        $ary_ref.value += ,'| My second value'
        write-host "Referenced array is: $($ary_ref.value)"
        write-host "Stack top is still: $($stk.peek())"

    But still no dice:

        Referenced array is: My first value | My second value
        Stack top is still: My first value

    I assume the peek() method returns a reference to the actual object, not a clone. If so, then the reference appears to be being replaced by a clone by PowerShell's expression-processing logic. Can somebody tell me if there is a way to do what I want to do? Or do I have to revert to pop() / modify / push()?
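    The culprit here is +=, not Peek(): += on a .NET array always allocates a new array, so the stack keeps pointing at the old one no matter how the reference is captured. A hedged sketch using a mutable collection, which can be grown in place through the reference Peek() returns:

        $stk = New-Object Collections.Stack
        $list = New-Object Collections.ArrayList
        [void]$list.Add('My first value')
        $stk.Push($list)

        # Peek() hands back the same ArrayList object, so Add mutates the
        # element that is actually on the stack - no pop/push needed.
        [void]$stk.Peek().Add('| My second value')

        $stk.Peek()   # shows both values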


  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error-prone and I'd like to make sure is tested. OK, so now (OK, I should have done it first) I'd like to write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly from the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or a clone of it? How would you test whether part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
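    One hedged sketch of a structural approach: serialize both objects and compare the bytes, then mutate the original and check that the clone's bytes stay fixed - a shared sub-object shows up as the clone changing when the original does. MakeComplexObject() and MutateDeepMember() are hypothetical helpers for the class under test:

        [Test]
        public void Clone_IsADeepCopy()
        {
            var original = MakeComplexObject();
            var clone = original.Clone();

            Assert.AreNotSame(original, clone);
            // Same serialized state: the clone starts out equal
            CollectionAssert.AreEqual(Serialize(original), Serialize(clone));

            // Mutate something deep in the original; a truly deep clone is unaffected
            byte[] before = Serialize(clone);
            MutateDeepMember(original);
            CollectionAssert.AreEqual(before, Serialize(clone));
        }

        private static byte[] Serialize(object o)
        {
            using (var ms = new MemoryStream())
            {
                new BinaryFormatter().Serialize(ms, o);
                return ms.ToArray();
            }
        }

    This works through private members without accessors, though it does tie the tests to the type being [Serializable].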


  • ASP Classic, SQL 2008, XML Output, and DSN vs DSN-Less Produces Chinese Letters

    - by RoLYroLLs
    I've been having a problem for the past month and can't seem to figure out what is wrong. Here's the setup and a little background.

    Background: I have a web host who was running my website on Windows Server 2003 and SQL Server 2000. One of my webpages returned a result set from a stored procedure on the SQL Server as XML. Below is the code.

    Stored procedure:

        select top 10
               1 as tag
             , null as parent
             , column1 as [item!1!column1!element]
             , column2 as [item!1!column2!element]
        from table1
        for XML EXPLICIT

    ASP page (index.asp):

        Call OpenConn
        Set cmd = Server.CreateObject("ADODB.Command")
        With cmd
            .ActiveConnection = dbc
            .CommandText = "name of proc"
            .CommandType = adCmdStoredProc
            .Parameters.Append .CreateParameter("@RetVal", adInteger, adParamReturnValue, 4)
            .Parameters.Append .CreateParameter("@Level", adInteger, adParamInput, 4, Level)
        End With
        Set rsItems = Server.CreateObject("ADODB.Recordset")
        With rsItems
            .CursorLocation = adUseClient
            .CursorType = adOpenStatic
            .LockType = adLockBatchOptimistic
            Set .Source = cmd
            .Open
            Set .ActiveConnection = Nothing
        End With
        If NOT rsItems.BOF AND NOT rsItems.EOF Then
            OutputXMLQueryResults rsItems, "items"
        End If
        Set rsItems = Nothing
        Set cmd = Nothing
        Call CloseConn

        Sub OpenConn()
            strConn = "Provider=SQLOLEDB;Data Source=[hidden];User Id=[hidden];Password=[hidden];Initial Catalog=[hidden];"
            Set dbc = Server.CreateObject("ADODB.Connection")
            dbc.open strConn
        End Sub

        Sub CloseConn()
            If IsObject(dbc) Then
                If dbc.State = adStateOpen Then
                    dbc.Close
                End If
                Set dbc = Nothing
            End If
        End Sub

        Sub OutputXMLQueryResults(RS, RootElementName)
            Response.Clear
            Response.ContentType = "text/xml"
            Response.Codepage = 65001
            Response.Charset = "utf-8"
            While Not RS.EOF
                Response.Write RS(0).Value
                RS.MoveNext
            WEnd
            Response.End
        End Sub

    Present: all was working great until my host upgraded to Windows Server 2008 and SQL Server 2008. All of a sudden the output started rendering as strings of CJK ("Chinese") characters, both in the browser and in View Source (screenshots omitted). However, I found that if I use a DSN connection:

        strConn = "DSN=[my DSN name];User Id=[hidden];Password=[hidden];Initial Catalog=[hidden];"

    it works perfectly fine! My current host is not going to support DSNs much longer, but that's out of scope for this issue. Someone told me to use an ADO Stream object instead of a Recordset object, but I'm unsure how to implement that.

    Question: has anyone run into this and found a way to fix it? What about that ADO Stream object - can someone help me with a sample that would fit my code?
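    For the ADO Stream route, there is a documented pattern for FOR XML queries: hand the Command an output stream and execute with adExecuteStream, so ADO never materializes a Recordset at all. A hedged sketch adapted to the code above (adExecuteStream = 1024; parameter setup omitted):

        Set stm = Server.CreateObject("ADODB.Stream")
        stm.Open

        cmd.Properties("Output Stream") = stm
        cmd.Execute , , 1024   ' adExecuteStream

        stm.Position = 0
        stm.Charset = "utf-8"
        Response.ContentType = "text/xml"
        Response.Write stm.ReadText
        stm.Close

    The working theory, hedged: the upgraded provider hands the FOR XML result back as UTF-16 text, and reinterpreting those bytes under the wrong charset is exactly what produces CJK characters; pulling the result through a Stream with an explicit charset keeps the encodings straight.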


  • Reading UTF-8 XML and writing it to a file with Python

    - by Harri
    I'm trying to parse a UTF-8 XML file and save some parts of it to another file. The problem is that this is my first Python script ever and I'm totally confused about the character-encoding problems I'm finding. My script fails immediately when it tries to write a non-ASCII character to a file, but it can print it to the command prompt (at least at some level). Here's the XML (the parts that matter, at least; it's a *.resx file which contains UI strings):

        <?xml version="1.0" encoding="utf-8"?>
        <root>
          <resheader name="foo">
            <value>bar</value>
          </resheader>
          <data name="lorem" xml:space="preserve">
            <value>ipsum öä</value>
          </data>
        </root>

    And here's my Python script:

        from xml.dom.minidom import parse

        names = []
        values = []

        def getStrings(path):
            dom = parse(path)
            data = dom.getElementsByTagName("data")
            for i in range(len(data)):
                name = data[i].getAttribute("name")
                names.append(name)
                value = data[i].getElementsByTagName("value")
                values.append(value[0].firstChild.nodeValue.encode("utf-8"))

        def writeToFile():
            with open("uiStrings-fi.py", "w") as f:
                for i in range(len(names)):
                    line = names[i] + '="' + values[i] + '"'  # varName='varValue'
                    f.write(line)
                    f.write("\n")

        getStrings("ResourceFile.fi-FI.resx")
        writeToFile()

    And here's the traceback:

        Traceback (most recent call last):
          File "GenerateLanguageFiles.py", line 24, in <module>
            writeToFile()
          File "GenerateLanguageFiles.py", line 19, in writeToFile
            line = names[i] + '="' + values[i] + '"'  # varName='varValue'
        UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in range(128)

    How should I fix my script so it reads and writes UTF-8 characters properly? The files I'm trying to generate will be used in test automation with Robot Framework.
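    The error comes from mixing types: names[i] is a unicode string, values[i] was turned into a UTF-8 byte string by .encode("utf-8"), and concatenating the two makes Python 2 try to decode the bytes back as ASCII. A hedged fix: keep everything unicode and encode once, at the file boundary:

        import codecs

        def getStrings(path):
            dom = parse(path)
            for node in dom.getElementsByTagName("data"):
                names.append(node.getAttribute("name"))
                value = node.getElementsByTagName("value")
                # no .encode() here: keep the value as a unicode object
                values.append(value[0].firstChild.nodeValue)

        def writeToFile():
            # codecs.open encodes every write as UTF-8
            with codecs.open("uiStrings-fi.py", "w", encoding="utf-8") as f:
                for name, value in zip(names, values):
                    f.write(u'%s="%s"\n' % (name, value))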


  • WCF and streaming requests and responses

    - by Cheeso
    Is it correct that in WCF, I cannot have a service write to a stream that is received by the client? My understanding is that streaming is supported in WCF for requests, responses, or both. Is it true that in all cases, the receiver of the stream must invoke Read? I would like to support a scenario where the receiver of the stream can Write on it. Is this supported? Let me show it this way. The simplest example of streaming in WCF is the service returning a FileStream to a client. This is a streamed response. The server code is like this:

        [ServiceContract]
        public interface IStreamService
        {
            [OperationContract]
            Stream GetData(string fileName);
        }

        public class StreamService : IStreamService
        {
            public Stream GetData(string filename)
            {
                FileStream fs = new FileStream(filename, FileMode.Open)
                return fs;
            }
        }

    And the client code is like this:

        StreamDemo.StreamServiceClient client =
            new WcfStreamDemoClient.StreamDemo.StreamServiceClient();
        Stream str = client.GetData(@"c:\path\to\myfile.dat");
        do
        {
            b = str.ReadByte(); // read next byte from stream
            ...
        } while (b != -1);

    (example taken from http://blog.joachim.at/?p=33)

    Clear, right? The server returns the Stream to the client, and the client invokes Read on it. Is it possible for the client to provide a Stream, and the server to invoke Write on it? In other words, rather than a pull model - where the client pulls data from the server - it is a push model, where the client provides the "sink" stream and the server writes into it. Is this possible in WCF, and if so, how? What are the config settings required for the binding, interface, etc.? The analogy is the Response.OutputStream from an ASP.NET request. In ASP.NET, any page can invoke Write on the output stream, and the content is received by the client. Can I do something similar in WCF? Thanks.
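    For reference, the inverse of the example above - a streamed request, where the client supplies the Stream and the service reads it - looks roughly like this (a hedged sketch; names are illustrative):

        [ServiceContract]
        public interface IUploadService
        {
            [OperationContract]
            void SendData(Stream data);   // client streams in, service reads
        }

    with a binding configured for streaming:

        <basicHttpBinding>
          <binding name="streamed"
                   transferMode="Streamed"
                   maxReceivedMessageSize="1099511627776" />
        </basicHttpBinding>

    Note that the receiver still Reads in this model: WCF's streaming is pull-based on the receiving side, so a true "server writes into a client-provided sink" inversion is not something the streaming transfer modes offer directly.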


  • Writing XML in different character encodings with Java

    - by Roman Myers
    I am attempting to write an XML library file that can be read again by my program. The file-writer code is as follows:

        XMLBuilder builder = new XMLBuilder();
        Document doc = builder.build(bookList);
        DOMImplementation impl = doc.getImplementation();
        DOMImplementationLS implLS = (DOMImplementationLS) impl.getFeature("LS", "3.0");
        LSSerializer ser = implLS.createLSSerializer();
        String out = ser.writeToString(doc);
        //System.out.println(out);
        try {
            FileWriter fstream = new FileWriter(location);
            BufferedWriter outwrite = new BufferedWriter(fstream);
            outwrite.write(out);
            outwrite.close();
        } catch (Exception e) {
        }

    The above code does write an XML document. However, in the XML header there is an attribute saying the file is encoded in UTF-16. When I read the file back in, I get the error "content not allowed in prolog". This error does not occur when the encoding attribute is manually changed to UTF-8. I am trying to get the above code either to write an XML document encoded in UTF-8, or to successfully parse a UTF-16 file. The parsing code is:

        DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
        DocumentBuilder loader = factory.newDocumentBuilder();
        Document document = loader.parse(filename);

    The last line returns the error.
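    What's happening: writeToString is required by the DOM spec to produce UTF-16 text, so the prolog declares UTF-16, but FileWriter then writes the string in the platform's default encoding - the declared and actual encodings disagree. One fix using the same DOM LS API, with an LSOutput that controls the encoding:

        LSOutput lsOut = implLS.createLSOutput();
        lsOut.setEncoding("UTF-8");                          // prolog and bytes now agree
        lsOut.setByteStream(new FileOutputStream(location)); // write bytes, not chars
        ser.write(doc, lsOut);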


  • Scheme - Memory System

    - by Eric
    I am trying to make a memory system where you input something into a slot of memory. I am making an alist in which the car of each pair is the memory location and the cdr is the val. I need the program to understand two messages, read and write: read just displays the selected memory location and the val assigned to it, and write changes the val at the given location or address. How do I make my code read from the location I want and write to the location I want? Feel free to test this yourself. Any help would be much appreciated. This is what I have:

        (define make-memory
          (lambda (n)
            (letrec ((mem '())
                     (dump (display mem)))
              (lambda ()
                (if (= n 0)
                    (cons (cons n 0) mem)
                    mem)
                (cons (cons (- n 1) 0) mem))
              (lambda (msg loc val)
                (cond ((equal? msg 'read)
                       (display (cons n val))
                       (set! n (- n 1)))
                      ((equal? msg 'write)
                       (set! mem (cons val loc))
                       (set! n (- n 1))
                       (display mem)))))))

        (define mymem (make-memory 100))


  • Rijndael managed: plaintext length detection

    - by sheepsimulator
    I am spending some time learning how to use the RijndaelManaged library in .NET, and developed the following function to test encrypting text, with slight modifications from the MSDN library:

        Function encryptBytesToBytes_AES(ByVal plainText As Byte(), ByVal Key() As Byte, ByVal IV() As Byte) As Byte()
            ' Check arguments.
            If plainText Is Nothing OrElse plainText.Length <= 0 Then
                Throw New ArgumentNullException("plainText")
            End If
            If Key Is Nothing OrElse Key.Length <= 0 Then
                Throw New ArgumentNullException("Key")
            End If
            If IV Is Nothing OrElse IV.Length <= 0 Then
                Throw New ArgumentNullException("IV")
            End If

            ' Declare the RijndaelManaged object used to encrypt the data.
            Dim aesAlg As RijndaelManaged = Nothing
            ' Declare the stream used to encrypt to an in-memory array of bytes.
            Dim msEncrypt As MemoryStream = Nothing

            Try
                ' Create a RijndaelManaged object with the specified key and IV.
                aesAlg = New RijndaelManaged()
                aesAlg.BlockSize = 128
                aesAlg.KeySize = 128
                aesAlg.Mode = CipherMode.ECB
                aesAlg.Padding = PaddingMode.None
                aesAlg.Key = Key
                aesAlg.IV = IV

                ' Create a decrytor to perform the stream transform.
                Dim encryptor As ICryptoTransform = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV)

                ' Create the streams used for encryption.
                msEncrypt = New MemoryStream()
                Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
                    Using swEncrypt As New StreamWriter(csEncrypt)
                        'Write all data to the stream.
                        swEncrypt.Write(plainText)
                    End Using
                End Using
            Finally
                ' Clear the RijndaelManaged object.
                If Not (aesAlg Is Nothing) Then
                    aesAlg.Clear()
                End If
            End Try

            ' Return the encrypted bytes from the memory stream.
            Return msEncrypt.ToArray()
        End Function

    Here is the actual code I am calling encryptBytesToBytes_AES() with:

        Private Sub btnEncrypt_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnEncrypt.Click
            Dim bZeroKey As Byte() = {&H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0, &H0}
            PrintBytesToRTF(encryptBytesToBytes_AES(bZeroKey, bZeroKey, bZeroKey))
        End Sub

    However, I get an exception thrown on swEncrypt.Write(plainText), stating that the 'Length of the data to encrypt is invalid.' But I know that the size of my key, IV, and plaintext is 16 bytes == 128 bits == aesAlg.BlockSize. Why is it throwing this exception? Is it because the StreamWriter is trying to make a String (ostensibly with some encoding) and it doesn't like &H0 as a value?
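    The guess at the end is on the right track: StreamWriter has no Byte() overload, so swEncrypt.Write(plainText) binds to Write(Object) and writes the string "System.Byte[]" - 13 characters, which is not a multiple of the 16-byte block size that PaddingMode.None demands. A hedged fix: skip the StreamWriter and hand the bytes straight to the CryptoStream:

        ' Write the 16 plaintext bytes directly; no text encoding involved
        Using csEncrypt As New CryptoStream(msEncrypt, encryptor, CryptoStreamMode.Write)
            csEncrypt.Write(plainText, 0, plainText.Length)
            csEncrypt.FlushFinalBlock()
        End Using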


  • How to pass filename to StandardInput (Process) in C#?

    - by Cosmo
    Hello guys! I'm using the native Windows application spamc.exe (SpamAssassin - sawin32) from the command line as follows:

        C:\SpamAssassin\spamc.exe -R < C:\email.eml

    Now I'd like to call this process from C#:

        Process p = new Process();
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.RedirectStandardInput = true;
        p.StartInfo.FileName = @"C:\SpamAssassin\spamc.exe";
        p.StartInfo.Arguments = @"-R";
        p.Start();
        p.StandardInput.Write(@"C:\email.eml");
        p.StandardInput.Close();
        Console.Write(p.StandardOutput.ReadToEnd());
        p.WaitForExit();
        p.Close();

    The above code just passes the filename as a string to spamc.exe (not the content of the file). However, this one works:

        Process p = new Process();
        p.StartInfo.UseShellExecute = false;
        p.StartInfo.RedirectStandardOutput = true;
        p.StartInfo.RedirectStandardInput = true;
        p.StartInfo.FileName = @"C:\SpamAssassin\spamc.exe";
        p.StartInfo.Arguments = @"-R";
        p.Start();
        StreamReader sr = new StreamReader(@"C:\email.eml");
        string msg = sr.ReadToEnd();
        sr.Close();
        p.StandardInput.Write(msg);
        p.StandardInput.Close();
        Console.Write(p.StandardOutput.ReadToEnd());
        p.WaitForExit();
        p.Close();

    Could someone point out why it works if I read the file and pass the content to spamc, but doesn't work if I just pass the filename as I would at the Windows command line?
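    The short answer: the < in the command line is never seen by spamc at all - the shell (cmd.exe) opens C:\email.eml and feeds its bytes to the program's stdin. RedirectStandardInput replaces the shell, so whatever is written there is the literal stdin data, and writing the path just sends a short string. A hedged byte-for-byte equivalent of the shell redirection (safer than ReadToEnd for mail whose encoding may not match the reader's):

        // Push the raw file bytes into stdin, exactly as "< C:\email.eml" would
        byte[] msg = File.ReadAllBytes(@"C:\email.eml");
        p.StandardInput.BaseStream.Write(msg, 0, msg.Length);
        p.StandardInput.BaseStream.Flush();
        p.StandardInput.Close();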

