Search Results

Search found 15670 results on 627 pages for 'multi level'.


  • Why oh why doesn't my ASP.NET TreeView update?

    - by Brendan
    I'm using an ASP.NET TreeView on a page with a custom XmlDataSource. When the user clicks on a node of the tree, a DetailsView pops up and edits a bunch of things about the underlying object. All this works properly, and the underlying object gets updated in my background object-management classes. Yay! However, my treeview just isn't updating the display. Either immediately (which I would like it to), or on full page re-load (which is the minimum I need). Am I subclassing XmlDataSource poorly? I really don't know. Can anyone point me in a good direction? Thanks! The markup looks about like this (chaff removed): <data:DefinitionDataSource runat="server" ID="DefinitionTreeSource" RootDefinitionID="uri:1"></data:DefinitionDataSource> <asp:TreeView ID="TreeView" runat="server" DataSourceID="DefinitionTreeSource"> <DataBindings> <asp:TreeNodeBinding DataMember="definition" TextField="name" ValueField="id" /> </DataBindings> </asp:TreeView> <asp:DetailsView ID="DetailsView1" runat="server" AutoGenerateRows="False" DataKeyNames="Id" DataSourceID="DefinitionSource" DefaultMode="Edit"> <Fields> <asp:BoundField DataField="Name" HeaderText="Name" HeaderStyle-Wrap="false" SortExpression="Name" /> <asp:CommandField ShowCancelButton="False" ShowInsertButton="True" ShowEditButton="True" ButtonType="Button" /> </Fields> </asp:DetailsView> And the DefinitionTreeSource code looks like this: public class DefinitionDataSource : XmlDataSource { public string RootDefinitionID { get { if (ViewState["RootDefinitionID"] != null) return ViewState["RootDefinitionID"] as String; return null; } set { if (!Object.Equals(ViewState["RootDefinitionID"], value)) { ViewState["RootDefinitionID"] = value; DataBind(); } } } public DefinitionDataSource() { } public override void DataBind() { base.DataBind(); setData(); } private void setData() { String defXML = "<?xml version=\"1.0\" ?>"; Test.Management.TestManager.Definition root = Test.Management.TestManager.Definition.GetDefinitionById(RootDefinitionID); if (root != null) this.Data = defXML + root.ToXMLString(); else this.Data = defXML + "<definition id=\"null\" name=\"Set Root Node\" />"; } } }
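
    One hedged thing to try, assuming the DetailsView raises its ItemUpdated event after a successful edit (the wiring below is illustrative, not taken from the post): explicitly rebind both the data source and the tree once the update completes, so the TreeView re-reads the regenerated XML rather than its cached copy.

      // Markup: <asp:DetailsView ... OnItemUpdated="DetailsView1_ItemUpdated">
      protected void DetailsView1_ItemUpdated(object sender, DetailsViewUpdatedEventArgs e)
      {
          // Regenerate the XML from the updated management objects,
          // then force the tree to re-read it.
          DefinitionTreeSource.DataBind();
          TreeView.DataBind();
      }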

    Read the article

  • How to reduce virtual memory by optimising my PHP code?

    - by iCeR
    My current code (see below) uses 147MB of virtual memory! My provider has allocated 100MB by default, and the process is killed when the script runs, causing an internal error. The code is utilising curl multi and must be able to loop with more than 150 iterations whilst still minimising the virtual memory. The code below is only set at 150 iterations and still causes the internal server error. At 90 iterations the issue does not occur. How can I adjust my code to lower the resource use / virtual memory? Thanks! <?php function udate($format, $utimestamp = null) { if ($utimestamp === null) $utimestamp = microtime(true); $timestamp = floor($utimestamp); $milliseconds = round(($utimestamp - $timestamp) * 1000); return date(preg_replace('`(?<!\\\\)u`', $milliseconds, $format), $timestamp); } $url = 'https://www.testdomain.com/'; $curl_arr = array(); $master = curl_multi_init(); for($i=0; $i<150; $i++) { $curl_arr[$i] = curl_init(); curl_setopt($curl_arr[$i], CURLOPT_URL, $url); curl_setopt($curl_arr[$i], CURLOPT_RETURNTRANSFER, 1); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYHOST, FALSE); curl_setopt($curl_arr[$i], CURLOPT_SSL_VERIFYPEER, FALSE); curl_multi_add_handle($master, $curl_arr[$i]); } do { curl_multi_exec($master,$running); } while($running > 0); for($i=0; $i<150; $i++) { $results = curl_multi_getcontent ($curl_arr[$i]); $results = explode("<br>", $results); echo $results[0]; echo "<br>"; echo $results[1]; echo "<br>"; echo udate('H:i:s:u'); echo "<br><br>"; usleep(100000); } ?> Processor Information Total processors: 8 Processor #1 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #2 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #3 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #4 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #5 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #6 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #7 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Processor #8 Vendor GenuineIntel Name Intel(R) Xeon(R) CPU E5405 @ 2.00GHz Speed 1995.120 MHz Cache 6144 KB Memory Information Memory for crash kernel (0x0 to 0x0) notwithin permissible range Memory: 8302344k/9175040k available (2176k kernel code, 80272k reserved, 901k data, 228k init, 7466304k highmem) System Information Linux server3.server.com 2.6.18-194.17.1.el5PAE #1 SMP Wed Sep 29 13:31:51 EDT 2010 i686 i686 i386 GNU/Linux Physical Disks SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back SCSI device sda: 1952448512 512-byte hdwr sectors (999654 MB) sda: Write Protect is off sda: Mode Sense: 03 00 00 08 SCSI device sda: drive cache: write back sd 0:1:0:0: Attached scsi disk sda sd 4:0:0:0: Attached scsi removable disk sdb sd 0:1:0:0: Attached scsi generic sg4 type 0 sd 4:0:0:0: Attached scsi generic sg7 type 0 Current Memory Usage total used free shared buffers cached Mem: 8306672 7847384 459288 0 487912 6444548 -/+ buffers/cache: 914924 7391748 Swap: 4095992 496 4095496 Total: 12402664 7847880 4554784 Current Disk Usage Filesystem Size Used Avail Use% Mounted on /dev/mapper/VolGroup00-LogVol00 898G 307G 546G 36% / /dev/sda1 99M 19M 76M 20% /boot none 4.0G 0 4.0G 0% /dev/shm /var/tmpMnt 4.0G 1.8G 2.0G 48% /tmp
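
    One hedged way to cap the footprint, assuming the responses can be processed independently: run the transfers in smaller batches and release each handle's buffered body as soon as it has been consumed, instead of keeping 150 completed transfers (and their response buffers) alive at once. The SSL options are omitted here for brevity; the batch size is a knob to tune against the 100MB cap.

      <?php
      $url = 'https://www.testdomain.com/';
      $total = 150;
      $batchSize = 30; // tune so each batch stays well under the memory cap

      for ($offset = 0; $offset < $total; $offset += $batchSize) {
          $master = curl_multi_init();
          $handles = array();
          for ($i = 0; $i < $batchSize; $i++) {
              $ch = curl_init($url);
              curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
              $handles[] = $ch;
              curl_multi_add_handle($master, $ch);
          }
          do {
              curl_multi_exec($master, $running);
              curl_multi_select($master); // block until activity; avoids busy-waiting
          } while ($running > 0);
          foreach ($handles as $ch) {
              echo curl_multi_getcontent($ch), "<br>";
              curl_multi_remove_handle($master, $ch); // releases the buffered body
              curl_close($ch);
          }
          curl_multi_close($master);
      }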

    Read the article

  • Advice on "Invalid Pointer Operation" when using complex records

    - by Xaz
    Env: Delphi 2007. (Justification: I tend to use complex records quite frequently as they offer almost all of the advantages of classes but with much simpler handling.) Anyhoo, one particularly complex record I have just implemented is trashing memory (later leading to an "Invalid Pointer Operation" error). This is an example of the memory trashing code: sSignature := gProfiles.Profile[_stPrimary].Signature.Formatted(True); The second time I call it I get "Invalid Pointer Operation". It works OK if I call it like this: AProfile := gProfiles.Profile[_stPrimary]; ASignature := AProfile.Signature; sSignature := ASignature.Formatted(True); Background Code: gProfiles: TProfiles; TProfiles = Record private FPrimaryProfileID: Integer; FCachedProfile: TProfile; ... public < much code removed > property Profile[ProfileType: TProfileType]: TProfile Read GetProfile; end; function TProfiles.GetProfile(ProfileType: TProfileType): TProfile; begin case ProfileType of _stPrimary : Result := ProfileByID(FPrimaryProfileID); ... end; end; function TProfiles.ProfileByID(iID: Integer): TProfile; begin <snip> if LoadProfileOfID(iID, FCachedProfile) then begin Result := FCachedProfile; end else ... end; TProfile = Record private ... public ... Signature: TSignature; ... end; TSignature = Record private public PlainTextFormat : string; HTMLFormat : string; // The text to insert into a message when using this profile function Formatted(bHTML: boolean): string; end; function TSignature.Formatted(bHTML: boolean): string; begin if bHTML then result := HTMLFormat else result := PlainTextFormat; < SNIP MUCH CODE > end; OK, so I have a record within a record within a record, which is approaching Inception level confusion and I'm the first to admit is not really a good model. Clearly I am going to have to restructure it. What I would like from you gurus is a better understanding of why it is trashing the memory (something to do with the string object that is created then freed...) so that I can avoid making these kinds of errors in future. Thanks

    Read the article

  • I need help converting a C# string from one character encoding to another?

    - by Handleman
    According to Spolsky I can't call myself a developer, so there is a lot of shame behind this question... Scenario: From a C# application, I would like to take a string value from a SQL db and use it as the name of a directory. I have a secure (SSL) FTP server on which I want to set the current directory using the string value from the DB. Problem: Everything is working fine until I hit a string value with a "special" character - I seem unable to encode the directory name correctly to satisfy the FTP server. The code example below: uses the "special" character é as an example; uses WinSCP as an external application for the ftps comms; does not show all the code required to set up the Process "_winscp"; sends commands to the WinSCP exe by writing to the process standardinput; for simplicity, does not get the info from the DB, but instead simply declares a string (but I did do a .Equals to confirm that the value from the DB is the same as the declared string); makes three attempts to set the current directory on the FTP server using different string encodings - all of which fail; and makes an attempt to set the directory using a string that was created from a hand-crafted byte array - which works. Process _winscp = new Process(); byte[] buffer; string nameFromString = "Sinéad O'Connor"; _winscp.StandardInput.WriteLine("cd \"" + nameFromString + "\""); buffer = Encoding.UTF8.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.UTF8.GetString(buffer) + "\""); buffer = Encoding.ASCII.GetBytes(nameFromString); _winscp.StandardInput.WriteLine("cd \"" + Encoding.ASCII.GetString(buffer) + "\""); byte[] nameFromBytes = new byte[] { 83, 105, 110, 130, 97, 100, 32, 79, 39, 67, 111, 110, 110, 111, 114 }; _winscp.StandardInput.WriteLine("cd \"" + Encoding.Default.GetString(nameFromBytes) + "\""); The UTF8 encoding changes é to 101 (decimal) but the FTP server doesn't like it. The ASCII encoding changes é to 63 (decimal) but the FTP server doesn't like it. When I represent é as value 130 (decimal) the FTP server is happy, except I can't find a method that will do this for me (I had to manually construct the string from explicit bytes). Anyone know what I should do to my string to encode the é as 130 and make the FTP server happy and finally elevate me to level 1 developer by explaining the only single thing a developer should understand?
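
    A hedged pointer rather than a definitive answer: 130 (0x82) is where é lives in the old OEM code pages (CP437/CP850), which Windows console programs typically expect on their input, while UTF-8 and Windows-1252 put it elsewhere. Assuming WinSCP's console input wants CP850, you can produce those bytes without hand-crafting the array, and write them to the process's raw input stream so the StreamWriter's own encoding doesn't interfere:

      // é is 0xE9 in Windows-1252 but 0x82 (130 decimal) in OEM CP437/CP850.
      Encoding oem = Encoding.GetEncoding(850);
      byte[] line = oem.GetBytes("cd \"Sinéad O'Connor\"\r\n");
      _winscp.StandardInput.BaseStream.Write(line, 0, line.Length);
      _winscp.StandardInput.BaseStream.Flush();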

    Read the article

  • jQuery action being called when selector isn't met?

    - by dougoftheabaci
    I've been working on a prototype for a client's web site and I've run into a rather significant snag. You can view the prototype here. As you can see, the way it works is you can scroll a set of slides horizontally and, by clicking one, open a stack containing yet more slides. If you then click again on an image in that stack it opens up a lightbox. Clicking on another stack or the close button will close that stack (and open another, as the case may be). That all works great. However, you get some weird behavior if you do the following: Click to open any stack. Click to open an image's lightbox (this works best if you click on the image that's level with the main list). Close the lightbox and the stack either by clicking the close button or clicking on another stack. Click back to the first stack. Instead of reopening the stack, you get the lightbox. This confuses me as the lightbox should only ever be called if there is a class on the containing UL, and that class is removed when the lightbox is closed. I've checked and double-checked this, it's definitely missing. Here are the respective functions: $("ul.hide a.lightbox").live("click",function(){ $("ul.show").removeClass("show").addClass("hide"); $(this).parent().parent().removeClass("hide").addClass("show"); $("ul.hide").animate({opacity: 0.2}); $("ul.show").animate({opacity: 1}); $("#next").animate({opacity: 0.2}); $("#prev").animate({opacity: 0.2}); return false; }); $("ul.show a.lightbox").live("click",function(){ $(this).fancybox().trigger("click"); return false; }); As you can see, in order for the lightbox to be called the containing UL has to have the class of show. However, if you check it with Firebug it won't. For those who are curious, the added .trigger("click"); is because the lightbox will require a double-click to launch otherwise. Any idea how I can fix this?
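
    A hedged guess at the mechanism, since .live() is delegated but plugin bindings are not: $(this).fancybox() attaches a click handler directly to the anchor, and that handler survives the UL's class flipping back to "hide" - so the next click fires the stale fancybox binding before your delegated handlers are consulted. One sketch of a workaround is to never leave a fancybox binding on the element and open it imperatively instead (the $.fancybox options-object call is a fancybox 1.x assumption here):

      $("ul.show a.lightbox").live("click", function () {
          // Open the lightbox directly; nothing stays bound to the anchor,
          // so the show/hide class check is honoured on every click.
          $.fancybox({ href: this.href });
          return false;
      });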

    Read the article

  • How to insert into a database from a stored procedure in log4net?

    - by Samreen
    I have to log thread context properties like this: string logFilePath = AppDomain.CurrentDomain.BaseDirectory + "log4netconfig.xml"; FileInfo finfo = new FileInfo(logFilePath); log4net.Config.XmlConfigurator.ConfigureAndWatch(finfo); ILog logger = LogManager.GetLogger("Exception.Logging"); log4net.ThreadContext.Properties["MESSAGE"] = exception.Message; log4net.ThreadContext.Properties["MODULE"] = "module1"; log4net.ThreadContext.Properties["COMPONENT"] = "component1"; logger.Debug("test"); and the configuration file is: <configuration> <configSections> <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler,Log4net"/> </configSections> <log4net> <logger name="Exception.Logging" level="Debug"> <appender-ref ref="AdoNetAppender"/> </logger> <appender name="AdoNetAppender" type="log4net.Appender.AdoNetAppender"> <connectionString value="Data Source=xe;User ID=test;Password=test;" /> <connectionType value="System.Data.OracleClient.OracleConnection, System.Data.OracleClient, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> <bufferSize value="10000"/> <commandText value="Log_Exception_Pkg.Insert_Log" /> <commandType value="StoredProcedure" /> <parameter> <parameterName value="@p_Error_Message" /> <dbType value="String" /> <size value="255" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%property{MESSAGE}"/> </layout> </parameter> <parameter> <parameterName value="@p_Module" /> <dbType value="String" /> <size value="225" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%property{MODULE}"/> </layout> </parameter> <parameter> <parameterName value="@p_Component" /> <dbType value="String" /> <size value="225" /> <layout type="log4net.Layout.PatternLayout"> <conversionPattern value="%property{COMPONENT}"/> </layout> </parameter> </appender> </log4net> </configuration> But it's not inserting them into the database. How can I get that to work?
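
    Two hedged things to check against this config: log4net sets a logger's level with a nested <level> element rather than a level attribute, and the AdoNetAppender holds events in memory until bufferSize is reached - at 10000, nothing is written until ten thousand events have accumulated. A sketch of both changes:

      <!-- level is a child element, not an attribute -->
      <logger name="Exception.Logging">
        <level value="DEBUG" />
        <appender-ref ref="AdoNetAppender" />
      </logger>

      <!-- inside the appender: flush each event to the stored procedure immediately -->
      <bufferSize value="1" />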

    Read the article

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC 3 (Razor) Web Application, with a particular page which is highly database intensive, and user experience is of the utmost priority. Thus, I am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching: public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences) { var results = _locationService.FindStuffByCriteria(searchPreferences); return PartialView("SearchResults", results); } As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting its info from - that is the job of the service. A couple of notes on the flow of control: Controllers get DI'ed a particular Service, depending on its area. In this example, this controller gets a LocationService. Services call through to an IQueryable<T> Repository and materialize results into T or ICollection<T>. How I want to implement caching: I can't use Output Caching - for a few reasons. First of all, this action method is invoked from the client-side (jQuery/AJAX), via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, I don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that - there is actually two-level caching going on. As I hinted above, I need to use regular data-caching, e.g Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - I only want caching on this particular action method. First thoughts would tell me to create another service (which inherits LocationService), and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems: The services are basic Class Libraries - no references to anything extra. I would need to add a reference to System.Web here. I would have to access the HTTP Context outside of the web application, which is considered bad practice, not only for testability, but in general - right? I also thought about using the Models folder in the Web Application (which I currently use only for ViewModels), but having a cache service in a models folder just doesn't sound right. So - any ideas? Is there a MVC-specific thing (like Action Filters, for example) I can use here? General advice/tips would be greatly appreciated.
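
    One hedged shape for this (ILocationService, Stuff and ToCacheKey below are illustrative names, not from the original code): put a tiny cache abstraction in the class library, implement it with System.Runtime.Caching.MemoryCache (.NET 4+, no System.Web reference needed), and wrap only this controller's service in a caching decorator wired up through the existing DI.

      public interface ICacheProvider
      {
          T Get<T>(string key) where T : class;
          void Set(string key, object value, TimeSpan ttl);
      }

      // Decorator: registered in DI only for the controllers that want caching.
      public class CachedLocationService : ILocationService
      {
          private readonly ILocationService _inner;
          private readonly ICacheProvider _cache;

          public CachedLocationService(ILocationService inner, ICacheProvider cache)
          {
              _inner = inner;
              _cache = cache;
          }

          public IList<Stuff> FindStuffByCriteria(SearchPreferences prefs)
          {
              var key = "location-stuff:" + prefs.ToCacheKey(); // hypothetical key builder
              var results = _cache.Get<IList<Stuff>>(key);
              if (results == null)
              {
                  results = _inner.FindStuffByCriteria(prefs);
                  _cache.Set(key, results, TimeSpan.FromMinutes(10));
              }
              return results;
          }
      }

    The controller keeps taking an ILocationService and stays exactly as thin as it is now; the two-level cache logic lives entirely in the decorator.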

    Read the article

  • R: Plotting a graph with different colors of points based on advanced criteria

    - by balconydoor
    What I would like to do is create a plot (using ggplot), where the x axis represents years, which have a different colour for the last three years in the plot than for the rest. The last three years should also meet a certain criterion, and based on this the last three years can be either red or green. The criterion is that the mean of the last three years should be less (making it green) or more (making it red) than the 66%-percentile of the remaining years. So far I have made two different functions calculating the last three year mean: LYM3 <- function (x) { LYM3 <- tail(x,3) mean(LYM3$Data,na.rm=T) } And the 66%-percentile for the remaining: perc66 <- function(x) { percentile <- head(x,-3) quantile(percentile$Data, .66, names=F,na.rm=T) } Here are two sets of data that can be used in the calculations (plots), the first of which is an example from my real data where LYM3(df1) < perc66(df1), and the second is just made-up data where LYM3 > perc66. df1<- data.frame(Year=c(1979:2010), Data=c(347261.87, 145071.29, 110181.93, 183016.71, 210995.67, 205207.33, 103291.78, 247182.10, 152894.45, 170771.50, 206534.55, 287770.86, 223832.43, 297542.86, 267343.54, 475485.47, 224575.08, 147607.81, 171732.38, 126818.10, 165801.08, 136921.58, 136947.63, 83428.05, 144295.87, 68566.23, 59943.05, 49909.08, 52149.11, 117627.75, 132127.79, 130463.80)) df2 <- data.frame(Year=c(1979:2010), Data=c(sample(50,29,replace=T),75,75,75)) Here's my code for the plot so far: plot <- ggplot(df1, aes(x=Year, y=Data)) + theme_bw() + geom_point(size=3, aes(colour=ifelse(df1$Year<2008, "black",ifelse(LYM3(df1) < perc66(df1),"green","red")))) + geom_line() + scale_x_continuous(breaks=c(1980,1985,1990,1995,2000,2005,2010), limits=c(1978,2011)) plot As you notice, it doesn't really do what I want it to do. The only thing it does is turn the years before 2008 into one level and those after into another one, and base the point colour on these two levels. Since I don't want this year to be stationary either, I made another tiny function: fun3 <- function(x) { df <- subset(x, Year==(max(Year)-2)) df$Year } So the previous code would have the same effect as: geom_point(size=3, aes(colour=ifelse(df1$Year<fun3(df1), "black","red"))) But it still does not care about my colours. Why does it make the years into levels? And how come an ifelse function doesn't work within another one in this case? How would it be possible to get the arguments to do what I like? I realise this might be a bit messy, asking for a lot at the same time, but I hope my description is pretty clear. It would be helpful if someone could at least point me in the right direction. I tried to put the code for the plot into a function as well so I wouldn't have to change the data frame at all functions within the plot, but I can't get it to work. Thank you!
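
    For what it's worth, aes() treats whatever you hand it as data to be mapped, so a vector of colour names becomes a discrete factor (hence the "levels") rather than literal colours. A hedged sketch: compute the colour as a column first, then tell ggplot the values are literal with scale_colour_identity():

      colour_for <- function(df) {
        cutoff <- max(df$Year) - 2
        recent <- ifelse(LYM3(df) < perc66(df), "green", "red")
        ifelse(df$Year < cutoff, "black", recent)   # last three years get recent colour
      }

      df1$colour <- colour_for(df1)

      ggplot(df1, aes(x = Year, y = Data)) +
        theme_bw() +
        geom_line() +
        geom_point(size = 3, aes(colour = colour)) +
        scale_colour_identity()   # use the colour names literally, no levels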

    Read the article

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a Row Level Security scheme using Views and Instead Of triggers on those views. Here is a simplified example structure: -- Table CREATE TABLE tblItem ( ItemId int identity(1,1) primary key, Name varchar(20) ) go -- View CREATE VIEW vwItem AS SELECT * FROM tblItem -- RLS Filtering Condition go -- Instead Of Insert Trigger CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) SELECT Name FROM inserted; END go If I want to insert a record and get its identity, before implementing the RLS Instead Of trigger, I used: DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = SCOPE_IDENTITY(); With the trigger, SCOPE_IDENTITY() no longer works - it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it. -- Nothing is added to @ItemIds DECLARE @ItemIds TABLE (ItemId int); INSERT INTO vwItem (Name) OUTPUT INSERTED.ItemId INTO @ItemIds VALUES ('MyName'); If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code; either by using an OUTPUT clause on that call or using a SELECT * FROM (). -- Modified Instead Of Insert Trigger w/ Output CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) OUTPUT INSERTED.ItemId SELECT Name FROM inserted; END go -- Calling Code INSERT INTO vwItem (Name) VALUES ('MyName'); The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue? BEGIN TRANSACTION DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = IDENT_CURRENT('tblItem'); COMMIT TRANSACTION Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".
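
    One hedged workaround, with the usual INSERT ... EXEC caveats (it cannot be nested and restricts what the inner batch may contain): keep the OUTPUT INSERTED.ItemId inside the trigger, which surfaces to the caller as a result set, and capture that result set like this:

      DECLARE @ItemIds TABLE (ItemId int);

      -- The trigger's OUTPUT clause comes back as a result set,
      -- which INSERT ... EXEC can capture into a table variable.
      INSERT INTO @ItemIds (ItemId)
      EXEC('INSERT INTO vwItem (Name) VALUES (''MyName'')');

      DECLARE @ItemId int;
      SELECT @ItemId = ItemId FROM @ItemIds;

    This stays scoped to the one insert, so it avoids the concurrency problem that makes IDENT_CURRENT() unsafe even inside a transaction.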

    Read the article

  • Hibernate - EhCache - Which region to cache associations/sets/collections?

    - by lifeisnotfair
    Hi all, I am a newcomer to Hibernate. It would be great if someone could comment on the following query that I have: Say I have a parent class and each parent has multiple children. So the mapping file of the parent class would be something like: parent.hbm.xml <hibernate-mapping > <class name="org.demo.parent" table="parent" lazy="true"> <cache usage="read-write" region="org.demo.parent"/> <id name="id" column="id" type="integer" length="10"> <generator class="native"> </generator> </id> <property name="name" column="name" type="string" length="50"/> <set name="children" lazy="true"> <cache usage="read-write" region="org.demo.parent.children" /> <key column="parent_id"/> <one-to-many class="org.demo.children"/> </set> </class> </hibernate-mapping> children.hbm.xml <hibernate-mapping > <class name="org.demo.children" table="children" lazy="true"> <cache usage="read-write" region="org.demo.children"/> <id name="id" column="id" type="integer" length="10"> <generator class="native"> </generator> </id> <property name="name" column="name" type="string" length="50"/> <many-to-one name="parent_id" column="parent_id" type="integer" length="10" not-null="true"/> </class> </hibernate-mapping> So for the set children, should we specify the region org.demo.parent.children where it should cache the association, or should we use the cache region of org.demo.children where the children would be getting cached? I am using EhCache as the 2nd level cache provider. I tried to search for the answer to this question but couldn't find any answer in this direction. It makes more sense to use org.demo.children, but I don't know in which scenarios one should use a separate cache region for associations/sets/collections as in the above case. Kindly provide your inputs, and let me know if I am not clear in my question. Thanks all.
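
    A hedged note on the semantics, which dissolves the either/or: the collection region (org.demo.parent.children) caches only the list of child ids per parent, while the child rows themselves are cached under the entity's own region (org.demo.children). Both regions are therefore in play, and an ehcache.xml would typically declare all three (the sizing values here are placeholders):

      <cache name="org.demo.parent" maxElementsInMemory="1000"
             eternal="false" timeToLiveSeconds="600" overflowToDisk="false" />
      <cache name="org.demo.children" maxElementsInMemory="5000"
             eternal="false" timeToLiveSeconds="600" overflowToDisk="false" />
      <!-- holds only the child ids per parent, not the child rows -->
      <cache name="org.demo.parent.children" maxElementsInMemory="1000"
             eternal="false" timeToLiveSeconds="600" overflowToDisk="false" />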

    Read the article

  • emacs: how do I use edebug on code that is defined in a macro?

    - by Cheeso
    I don't even know the proper terminology for this lisp syntax, so I don't know if the words I'm using to ask the question make sense. But the question makes sense, I'm sure. So let me just show you. cc-mode (cc-fonts.el) has things called "matchers" which are bits of code that run to decide how to fontify a region of code. That sounds simple enough, but the matcher code is in a form I don't completely understand, with backticks and comma-atsign and just comma and so on, and furthermore it is embedded in a c-lang-defconst, which itself is a macro. And I want to run edebug on that code. Look: (c-lang-defconst c-basic-matchers-after "Font lock matchers for various things that should be fontified after generic casts and declarations are fontified. Used on level 2 and higher." t `(;; Fontify the identifiers inside enum lists. (The enum type ;; name is handled by `c-simple-decl-matchers' or ;; `c-complex-decl-matchers' below. ,@(when (c-lang-const c-brace-id-list-kwds) `((,(c-make-font-lock-search-function (concat "\\<\\(" (c-make-keywords-re nil (c-lang-const c-brace-id-list-kwds)) "\\)\\>" ;; Disallow various common punctuation chars that can't come ;; before the '{' of the enum list, to avoid searching too far. "[^\]\[{}();,/#=]*" "{") '((c-font-lock-declarators limit t nil) (save-match-data (goto-char (match-end 0)) (c-put-char-property (1- (point)) 'c-type 'c-decl-id-start) (c-forward-syntactic-ws)) (goto-char (match-end 0))))))) I am reading up on lisp syntax to figure out what those things are and what to call them, but aside from that, how can I run edebug on the code that follows the comment that reads ;; Fontify the identifiers inside enum lists. ? I know how to run edebug on a defun - just invoke edebug-defun within the function's definition, and off I go. Is there a corresponding thing I need to do to edebug the cc-mode matcher code forms?

    Read the article

  • Security review of an authenticated Diffie Hellman variant

    - by mtraut
    EDIT I'm still hoping for some advice on this; I tried to clarify my intentions... When I came upon device pairing in my mobile communication framework I studied a lot of papers on this topic and also got some input from previous questions here. But I didn't find a ready-to-implement protocol solution - so I invented a derivative, and as I'm no crypto geek I'm not sure about the security caveats of the final solution. The main questions are: Is SHA256 sufficient as a commit function? Is the addition of the shared secret as authentication info in the commit string safe? What is the overall security of the 1024 bit group DH? I assume at most a 2^-24 probability of a successful MITM attack (because of the 24 bit challenge) - is this plausible? What may be the most promising attack (besides ripping the device out of my numb, cold hands)? This is the algorithm sketch: For first time pairing, a solution proposed in "Key agreement in peer-to-peer wireless networks" (DH-SC) is implemented. I based it on a commitment derived from: A fixed "UUID" for the communicating entity/role (128 bit, sent at protocol start, before commitment) The public DH key (192 bit private key, based on the 1024 bit Oakley group) A 24 bit random challenge Commit is computed using SHA256 c = sha256( UUID || DH pub || Chall) Both parties exchange this commitment, open and transfer the plain content of the above values. The 24 bit random is displayed to the user for manual authentication DH session key (128 bytes, see above) is computed When the user opts for persistent pairing, the session key is stored with the remote UUID as a shared secret Next time devices connect, commit is computed by additionally hashing the previous DH session key before the random challenge. For sure it is not transferred when opening. c = sha256( UUID || DH pub || DH sess || Chall) Now the user is not bothered authenticating when the local party can derive the same commitment using his own, stored previous DH session key. After successful connection the new DH session key becomes the new shared secret. As this does not exactly fit the protocols I found so far (and as such their security proofs), I'd be very interested to get an opinion from some more crypto-enabled guys here. BTW, I did read about the "EKE" protocol, but I'm not sure what the extra security level is.
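
    One concrete nit on the commit construction, sketched below (Python, purely illustrative): sha256(UUID || DHpub || Chall) with plain concatenation is ambiguous, since bytes can migrate between adjacent fields without changing the hash; length-prefixing each field removes that. On first pairing, the shared-secret field would simply be empty.

      import hashlib, struct

      def commit(uuid: bytes, dh_pub: bytes, dh_sess: bytes, chall: bytes) -> bytes:
          h = hashlib.sha256()
          for field in (uuid, dh_pub, dh_sess, chall):
              h.update(struct.pack(">I", len(field)))  # unambiguous field boundaries
              h.update(field)
          return h.digest()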

    Read the article

  • Minutiae on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private Category wasn't generating a setter. It was because my Category was named: // .m @interface MyClass (private) @property (readwrite, copy) NSArray* myProperty; @end Changing it to: // .m @interface MyClass () @property (readwrite, copy) NSArray* myProperty; @end and my setter is synthesized. I now know that Class Extension is not just another name for an anonymous Category. Leaving a Category unnamed causes it to morph into a different beast: one that now gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: Categories are generally used to add methods to any class at runtime, and Class Extensions are generally used to enforce private API implementation and add ivars. I accept this. But there are trifles that confuse me. First, at a high level: Why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same, or different concepts. If they are the same, I would expect the exact same things to be possible using a Category with no name as with a named Category (which they are not). If they are different, (which they are) I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a Class Extension, just write a Category, but leave out the name. It magically changes." Second, on the topic of compile time enforcement: If you can't add properties in a named Category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file: // .h @interface MyClass : NSObject @property (readonly, copy) NSString* myString; @end Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly: // .m @interface MyClass () @property (readonly, copy) NSString* myString; @end I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between Category and Class Extension and I try: // .m @interface MyClass (private) @property (readonly, copy) NSString* myString; @end The compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would had I not declared the readwrite property in the Category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) Categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?

    Read the article

  • WPF: Binding to ListBoxItem.IsSelected doesn't work for off-screen items

    - by Qwertie
    In my program I have a set of view-model objects to represent items in a ListBox (multi-select is allowed). The viewmodel has an IsSelected property that I would like to bind to the ListBox so that selection state is managed in the viewmodel rather than in the listbox itself. However, apparently the ListBox doesn't maintain bindings for most of the off-screen items, so in general the IsSelected property is not synchronized correctly. Here is some code that demonstrates the problem. First XAML: <StackPanel> <StackPanel Orientation="Horizontal"> <TextBlock>Number of selected items: </TextBlock> <TextBlock Text="{Binding NumItemsSelected}"/> </StackPanel> <ListBox ItemsSource="{Binding Items}" Height="200" SelectionMode="Extended"> <ListBox.ItemContainerStyle> <Style TargetType="{x:Type ListBoxItem}"> <Setter Property="IsSelected" Value="{Binding IsSelected}"/> </Style> </ListBox.ItemContainerStyle> </ListBox> <Button Name="TestSelectAll" Click="TestSelectAll_Click">Select all</Button> </StackPanel> C# Select All handler: private void TestSelectAll_Click(object sender, RoutedEventArgs e) { foreach (var item in _dataContext.Items) item.IsSelected = true; } C# viewmodel: public class TestItem : NPCHelper { TestDataContext _c; string _text; public TestItem(TestDataContext c, string text) { _c = c; _text = text; } public override string ToString() { return _text; } bool _isSelected; public bool IsSelected { get { return _isSelected; } set { _isSelected = value; FirePropertyChanged("IsSelected"); _c.FirePropertyChanged("NumItemsSelected"); } } } public class TestDataContext : NPCHelper { public TestDataContext() { for (int i = 0; i < 200; i++) _items.Add(new TestItem(this, i.ToString())); } ObservableCollection<TestItem> _items = new ObservableCollection<TestItem>(); public ObservableCollection<TestItem> Items { get { return _items; } } public int NumItemsSelected { get { return _items.Where(it => it.IsSelected).Count(); } } } public class NPCHelper : INotifyPropertyChanged { public event PropertyChangedEventHandler PropertyChanged; public void FirePropertyChanged(string prop) { if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs(prop)); } } Two separate problems can be observed. If you click the first item and then press Shift+End, all 200 items should be selected; however, the heading reports that only 21 items are selected. If you click "Select all" then all items are indeed selected. If you then click an item in the ListBox you would expect the other 199 items to be deselected, but this does not happen. Instead, only the items that are on the screen (and a few others) are deselected. All 199 items will not be deselected unless you first scroll through the list from beginning to end (and even then, oddly enough, it doesn't work if you perform scrolling with the little scroll box). My questions are: Can someone explain precisely why this occurs? Can I avoid or work around the problem?
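
    Both observations fit UI virtualization: the ListBox only creates (and binds) item containers for items near the viewport, so the IsSelected bindings for off-screen rows simply don't exist yet. A hedged first experiment is to switch virtualization off for this list, at the cost of materializing all 200 containers up front:

      <ListBox ItemsSource="{Binding Items}" Height="200" SelectionMode="Extended"
               VirtualizingStackPanel.IsVirtualizing="False">
          <!-- ItemContainerStyle with the IsSelected setter, as before -->
      </ListBox>

    For large lists where that cost is unacceptable, the usual alternative is to keep selection state authoritative in the viewmodel and reconcile it with the ListBox in code when needed, rather than relying on per-container bindings.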

    Read the article

  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails is sexy, there are some serious problems with it. Grails' controllers: 1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause grails to blow up. 2) are based on an obsolete request parameters model (rather than form backing objects, which are much nicer). 3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code. 4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies than basic parameter model. 5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails? 6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code in two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects. Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when it's deployed, that I think it discourages fast, tdd test cycles. Most things seem to screw up grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the Controller layer. Suggestions on what I can do?

    Read the article

  • Why do I get an extra newline in the middle of a UTF-8 character with XML::Parser?

    - by René Nyffenegger
    I encountered a problem dealing with UTF-8, XML and Perl. The following is the smallest piece of code and data in order to reproduce the problem. Here's an XML file that needs to be parsed: <?xml version="1.0" encoding="utf-8"?> <test> <words>???????????? ??????? ????????? ???? ???????????? ??????</words> <words>???????????? ??????? ????????? ???? ???????????? ??????</words> <words>???????????? ??????? ????????? ???? ???????????? ??????</words> [<words> .... </words> 148 times repeated] <words>???????????? ??????? ????????? ???? ???????????? ??????</words> <words>???????????? ??????? ????????? ???? ???????????? ??????</words> </test> The parsing is done with this perl script: use warnings; use strict; use XML::Parser; use Data::Dump; my $in_words = 0; my $xml_parser=new XML::Parser(Style=>'Stream'); $xml_parser->setHandlers ( Start => \&start_element, End => \&end_element, Char => \&character_data, Default => \&default); open OUT, '>out.txt'; binmode (OUT, ":utf8"); open XML, 'xml_test.xml' or die; $xml_parser->parse(*XML); close XML; close OUT; sub start_element { my($parseinst, $element, %attributes) = @_; if ($element eq 'words') { $in_words = 1; } else { $in_words = 0; } } sub end_element { my($parseinst, $element, %attributes) = @_; if ($element eq 'words') { $in_words = 0; } } sub default { # nothing to see here; } sub character_data { my($parseinst, $data) = @_; if ($in_words) { if ($in_words) { print OUT "$data\n"; } } } When the script is run, it produces the out.txt file. The problem is in this file on line 147. The 22nd character (which in utf-8 consists of \xd6 \xb8) is split between the d6 and b8 with a new line. This should not happen. Now, I am interested if someone else has this problem or can reproduce it, and why I am getting this problem. I am running this script on Windows: C:\temp>perl -v This is perl, v5.10.0 built for MSWin32-x86-multi-thread (with 5 registered patches, see perl -V for more detail) Copyright 1987-2007, Larry Wall Binary build 1003 [285500] provided by ActiveState http://www.ActiveState.com Built May 13 2008 16:52:49
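
    Separately from the Perl-build question, a hedged hardening: expat is allowed to hand character data to the Char handler in several chunks, so printing "$data\n" per call can emit a newline in the middle of an element's text. Accumulating per element and printing once in end_element avoids that class of problem:

      my $words_buf = '';

      sub character_data {
          my ($parseinst, $data) = @_;
          $words_buf .= $data if $in_words;   # accumulate chunks
      }

      sub end_element {
          my ($parseinst, $element) = @_;
          if ($element eq 'words') {
              print OUT "$words_buf\n";       # one line per complete element
              $words_buf = '';
              $in_words  = 0;
          }
      }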

    Read the article

  • PHP MySQL Zend-ACL - Find all inherited items (Children / Parents)

    - by Scoobler
    I have one MySQL DB table like the following, the resources table: id | name | type 1 | guest | user 2 | member | user 3 | moderator | user 4 | owner | user 5 | admin | user 6 | index | controller Onto the next table, the rules table: id | user_id | rule | resource_id | extras 1 | 2 | 3 | 1 | null 2 | 3 | 3 | 2 | null 3 | 4 | 3 | 3 | null 4 | 5 | 3 | 4 | null 5 | 6 | 1 | 1 | index,login,register 6 | 6 | 2 | 2 | login,register 7 | 6 | 1 | 2 | logout OK, sorry for the length, but I am trying to give a full picture of what I am trying to do. So the way it works, a role (aka user) can be granted (rule: 1) access to a controller, a role can inherit (rule: 3) access from another role, or a role can be denied (rule: 2) access to a controller. (A user is a resource and a controller is a resource.) Access to actions is granted / denied using the extras column. This all works; it's not a problem with setting up the ACL within Zend. What I am trying to do is show the relationships; to do that I need to find the lowest level a role is granted access to a controller, stopping if it has explicitly been removed. I plan on listing the roles. When I click a role, I want it to show all the controllers that role has access to. Then clicking on a controller shows the actions the role is allowed to do. So in the example above, a guest is allowed to view the index action of the index controller along with the login action. A member inherits the same access, but is then denied access to the login action and register action. A moderator inherits the rules of a member. So if I were to select the role moderator, I want to see the controller index listed. If I click on the controller, it should show the allowed actions as being action: index (which was originally granted to the guest, but hasn't since been disallowed). Are there any examples of doing this? I am obviously working with the Zend MVC (PHP) and MySQL. Even just a pseudo-code example would be a helpful starting point - this is one of the last parts of the jigsaw I am putting together. P.S. Obviously I have the ACL object - is it going to be easier to interrogate that, or is it better to do it myself via PHP/MySQL? The aim will be to show what a role can access, which will then allow me to add or edit a role, controller and action in a GUI style (that is somewhat the easy bit) - currently I am updating the DB manually as I have been building the site.

    Read the article

  • NSNotifications vs delegate for multiple instances of same protocol

    - by Brent Traut
    I could use some architectural advice. I've run into the following problem a few times now and I've never found a truly elegant way to solve it. The issue, described at the highest level possible: I have a parent class that would like to act as the delegate for multiple children (all using the same protocol), but when the children call methods on the parent, the parent no longer knows which child is making the call. I would like to use loose coupling (delegates/protocols or notifications) rather than direct calls. I don't need multiple handlers, so notifications seem like they might be overkill. To illustrate the problem, let me try a super-simplified example: I start with a parent view controller (and corresponding view). I create three child views and insert each of them into the parent view. I would like the parent view controller to be notified whenever the user touches one of the children. There are a few options to notify the parent: Define a protocol. The parent implements the protocol and sets itself as the delegate to each of the children. When the user touches a child view, its view controller calls its delegate (the parent). In this case, the parent is notified that a view is touched, but it doesn't know which one. Not good enough. Same as #1, but define the methods in the protocol to also pass some sort of identifier. When the child tells its delegate that it was touched, it also passes a pointer to itself. This way, the parent knows exactly which view was touched. It just seems really strange for an object to pass a reference to itself. Use NSNotifications. The parent defines a separate method for each of the three children and then subscribes to the "viewWasTouched" notification for each of the three children as the notification sender. The children don't need to attach themselves to the user dictionary, but they do need to send the notification with a pointer to themselves as the scope. Same as #4, but rather than using separate methods, the parent could just use one with a switch case or other branching along with the notification's sender to determine which path to take. Create multiple man-in-the-middle classes that act as the delegates to the child views and then call methods on the parent either with a pointer to the child or with some other differentiating factor. This approach doesn't seem scalable. Are any of these approaches considered best practice? I can't say for sure, but it feels like I'm missing something more obvious/elegant.
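
    For what it's worth, passing the sender (option 2) is the idiomatic Cocoa delegate shape rather than a smell - UIKit does it everywhere, e.g. tableView:didSelectRowAtIndexPath:. A hedged sketch, with TouchableView and the method names invented for illustration:

      @class TouchableView;

      @protocol TouchableViewDelegate <NSObject>
      // The sender comes first, mirroring UIKit's delegate style.
      - (void)touchableViewWasTouched:(TouchableView *)view;
      @end

      // In the child, on touch:
      [self.delegate touchableViewWasTouched:self];

      // In the parent:
      - (void)touchableViewWasTouched:(TouchableView *)view {
          if (view == self.leftChildView) { /* react to that specific child */ }
      }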

    Read the article

  • Using Selenium-IDE with a rich Javascript application?

    - by Darien
    Problem At my workplace, we're trying to find the best way to create automated tests for an almost wholly JavaScript-driven intranet application. Right now we're stuck trying to find a good tradeoff between: application code in reusable and nest-able GUI components; tests which are easily created by the testing team; tests which can be recorded once and then automated; and tests which do not break after small cosmetic changes to the site. XPath expressions (or other possible expressions, like jQuery selectors) naively generated from Selenium-IDE are often non-repeatable and very fragile. Conversely, having the JS code generate special unique ID values for every important DOM-element on the page... well, that is its own headache, complicated by re-usable GUI components and IDs needing to be consistent when the test is re-run. What successes have other people had with this kind of thing? How do you do automated application-level testing of a rich JS interface? Limitations We are using JavascriptMVC 2.0, hopefully 3.0 soon so that we can upgrade to jQuery 1.4.x. The test-making folks are mostly trained to use Selenium IDE to directly record things. The test leads would prefer a page-unique HTML ID on each clickable element on the page... Training the testers to write or alter special expressions (such as telling them which HTML class-names are important branching points) is a no-go. We try to make re-usable javascript components, but this means very few GUI components can treat themselves (or what they contain) as unique. Some of our components already use HTML ID values in their operation. I'd like to avoid doing this anyway, but it complicates the idea of ID-based testing. It may be possible to add custom facilities (like a locator-builder or new locator method) to the Selenium-IDE installation testers use. Almost everything that goes on occurs within a single "page load" from a conventional browser perspective, even when items are saved. Current thoughts I'm considering a system where a custom locator-builder (javascript code) for Selenium-IDE will talk with our application code as the tester is recording. In this way, our application becomes partially responsible for generating a mostly-flexible expression (XPath or jQuery) for any given DOM element. While this can avoid requiring more training for testers, I worry it may be over-thinking things.

    Read the article

  • How to write log files to two different files

    - by Sun
    My application runs on a customized client framework; the client framework uses log4net to write its own log files. Our application has to use the same log4net to write our log files to our own path (say, our customized path). Currently our log files are created, but the log statements are not written to them; they are written to the client framework's log file instead. I searched a lot of sites; the link http://stackoverflow.com/questions/308436/log4net-programmatcially-specify-multiple-loggers-with-multiple-file-appenders helped me configure log4net programmatically, but still my log statements are not written to my log file. The code used is as below: public class TraceLog { private string message = string.Empty; private static ILog ILogger = null; private static TraceLog instance = new TraceLog(); private TraceLog() { SetLevel("Log4net.MainForm", "ALL"); AddAppender("Log4net.MainForm", CreateFileAppender("FileAppender", "C:\\mylog.log")); } public static TraceLog Instance { get { return instance; } } public void Debug(string logMessage) { message = PrepareLog(logMessage); ILogger.Debug(message); } protected string PrepareLog(string logMessage) { string message = GetFileMethodLineNumberInfo(); message += logMessage; return message; } protected string GetFileMethodLineNumberInfo() { StackTrace stackTrace = new StackTrace(true); // The position 3 is relative to the index of the specified method StackFrame stackFrame = stackTrace.GetFrame(3); return (stackFrame.GetMethod().DeclaringType.Name + "/" + stackFrame.GetMethod().Name + "/" + stackFrame.GetFileLineNumber() + ":"); } private static void SetLevel(string loggerName, string levelName) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.Level = l.Hierarchy.LevelMap[levelName]; } private static void AddAppender(string loggerName, IAppender appender) { ILogger = LogManager.GetLogger(loggerName); log4net.Repository.Hierarchy.Logger l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger; l.AddAppender(appender); } private static IAppender CreateFileAppender(string name, string fileName) { FileAppender appender = new FileAppender(); appender.Name = name; appender.File = fileName; appender.AppendToFile = true; //PatternLayout layout = new PatternLayout(); //layout.ConversionPattern = "%d [%t] %-5p %c [%x] - %m%n"; //layout.ActivateOptions(); //appender.Layout = layout; appender.ActivateOptions(); return appender; } } } Anyone please help how to solve this.
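
    Two hedged suspects in the code above: log4net loggers are additive by default, so events sent to "Log4net.MainForm" also flow up the hierarchy into the framework's appenders; and the FileAppender is built with its PatternLayout commented out, and an appender without a layout cannot format events into the file. A sketch of both fixes:

      private static void AddAppender(string loggerName, IAppender appender)
      {
          ILogger = LogManager.GetLogger(loggerName);
          var l = (log4net.Repository.Hierarchy.Logger)ILogger.Logger;
          l.Additivity = false; // keep events out of the framework's appenders
          l.AddAppender(appender);
      }

      private static IAppender CreateFileAppender(string name, string fileName)
      {
          var appender = new FileAppender { Name = name, File = fileName, AppendToFile = true };
          var layout = new PatternLayout("%date [%thread] %-5level %logger - %message%newline");
          layout.ActivateOptions();
          appender.Layout = layout; // without a layout nothing useful is written
          appender.ActivateOptions();
          return appender;
      }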

    Read the article

  • I need to copy only selected files and folders in PHP

    - by OM The Eternity
    I am using the following code, in which I initially take the difference of two folder structures; the output then needs to be copied to the other folder. Here is the code: $source = '/var/www/html/copy1'; $mirror = '/var/www/html/copy2'; function scan_dir_recursive($dir, $rel = null) { $all_paths = array(); $new_paths = scandir($dir); foreach ($new_paths as $path) { if ($path == '.' || $path == '..') { continue; } if ($rel === null) { $path_with_rel = $path; } else { $path_with_rel = $rel . DIRECTORY_SEPARATOR . $path; } $full_path = $dir . DIRECTORY_SEPARATOR . $path; $all_paths[] = $path_with_rel; if (is_dir($full_path)) { $all_paths = array_merge( $all_paths, scan_dir_recursive($full_path, $path_with_rel) ); } } return $all_paths; } $diff_paths = array_diff( scan_dir_recursive($mirror), scan_dir_recursive($source) ); /*$diff_path = array_diff($mirror,$original);*/ echo "<pre>Difference ";print_r($diff_paths); foreach($diff_paths as $path) { echo $source1 = "var/www/html/copy2/".$path; echo "<br>"; $des = "var/www/html/copy1/".$path; copy_recursive_dirs($source1, $des); } function copy_recursive_dirs($dirsource, $dirdest) { $dir_handle=opendir($dirsource); mkdir($dirdest,0777); while(false!==($file=readdir($dir_handle))) {/*echo "<pre>"; print_r($file);*/ if($file!="." && $file!="..") { if(is_dir($dirsource.DIRECTORY_SEPARATOR.$file)) { //Copy the file at the same level in the destination folder copy_recursive_dirs($dirsource.DIRECTORY_SEPARATOR.$file, $dirdest.DIRECTORY_SEPARATOR.$file); } else{ //Copy the dir at the same lavel in the destination folder copy ($dirsource.DIRECTORY_SEPARATOR.$file, $dirdest.DIRECTORY_SEPARATOR.$file); } } } closedir($dir_handle); return true; } Whenever I execute the script I get the difference output, but the files are not copied to the second folder. Please help me rectify this.
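
    A hedged pair of fixes for the copy stage, assuming the scan/diff part already returns the right relative paths: the copy-loop paths are missing the leading slash ("var/www/html/copy2/..." is resolved relative to the script's working directory, so opendir/copy silently fail), and copy_recursive_dirs() calls mkdir() unconditionally and treats plain files as directories. Since $diff_paths already lists every nested path individually, the loop can handle each entry directly:

      foreach ($diff_paths as $path) {
          $src = '/var/www/html/copy2/' . $path;   // note the leading slash
          $des = '/var/www/html/copy1/' . $path;

          if (is_dir($src)) {
              if (!is_dir($des)) {
                  mkdir($des, 0777, true);         // recursive mkdir for nested dirs
              }
          } else {
              $dir = dirname($des);
              if (!is_dir($dir)) {
                  mkdir($dir, 0777, true);
              }
              copy($src, $des);
          }
      }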

    Read the article

  • Play! Framework - Can my view template be localised when rendering it as an AsyncResult?

    - by avik
    I've recently started using the Play! framework (v2.0.4) for writing a Java web application. In the majority of my controllers I'm following the paradigm of suspending the HTTP request until the promise of a web service response has been fulfilled. Once the promise has been fulfilled, I return an AsyncResult. This is what most of my actions look like (with a bunch of code omitted): public static Result myActionMethod() { Promise<MyWSResponse> wsResponse; // Perform a web service call that will return the promise of a MyWSResponse... return async(wsResponse.map(new Function<MyWSResponse, Result>() { @Override public Result apply(MyWSResponse response) { // Validate response... return ok(myScalaViewTemplate.render(response.data())); } })); } I'm now trying to internationalise my app, but hit the following error when I try to render a template from an async method: [error] play - Waiting for a promise, but got an error: There is no HTTP Context available from here. java.lang.RuntimeException: There is no HTTP Context available from here. at play.mvc.Http$Context.current(Http.java:27) ~[play_2.9.1.jar:2.0.4] at play.mvc.Http$Context$Implicit.lang(Http.java:124) ~[play_2.9.1.jar:2.0.4] at play.i18n.Messages.get(Messages.java:38) ~[play_2.9.1.jar:2.0.4] at views.html.myScalaViewTemplate$.apply(myScalaViewTemplate.template.scala:40) ~[classes/:na] at views.html.myScalaViewTemplate$.render(myScalaViewTemplate.template.scala:87) ~[classes/:na] at views.html.myScalaViewTemplate.render(myScalaViewTemplate.template.scala) ~[classes/:na] In short, where I've got a message bundle lookup in my view template, some Play! code is attempting to access the original HTTP request and retrieve the accept-languages header, in order to know which message bundle to use. But it seems that the HTTP request is inaccessible from the async method. I can see a couple of (unsatisfactory) ways to work around this: Go back to the 'one thread per request' paradigm and have threads block waiting for responses. Figure out which language to use at Controller level, and feed that choice into my template. I also suspect this might not be an issue on trunk. I know that there is a similar issue in 2.0.4 with regards to not being able to access or modify the Session object which has recently been fixed. However I'm stuck on 2.0.4 for the time being, so is there a better way that I can resolve this problem?
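
    For Play 2.0.x specifically, a hedged workaround that predates the fix on trunk: capture the Http.Context on the request thread and restore it inside the Function, so the Messages lookup can see the request's accept-languages again. (callWebService below stands in for the real WS call.)

      public static Result myActionMethod() {
          final Http.Context ctx = Http.Context.current(); // grabbed on the request thread
          Promise<MyWSResponse> wsResponse = callWebService(); // hypothetical helper

          return async(wsResponse.map(new Function<MyWSResponse, Result>() {
              @Override
              public Result apply(MyWSResponse response) {
                  Http.Context.current.set(ctx); // restore for this worker thread
                  return ok(myScalaViewTemplate.render(response.data()));
              }
          }));
      }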

    Read the article

  • CodeIgniter Error Log Info + Errors

    - by fatnjazzy
    Hi, is there a way to save Info + Errors in the log without Debug? How come the debug level appears with info? If I want to log the info "Account id 4345 was deleted by Admin", why do I need to see all of these: DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Hooks Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 URI Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Router Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Output Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Input Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Global POST and COOKIE data sanitized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Loader Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/safe_charge.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config file loaded: config/web_fx.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: loadutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: objectsutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: logutils_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Helper loaded: password_helper DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Database Driver Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 cURL Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Language Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Config Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Account MX_Controller Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/pending_account_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/proccess_accounts_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/web_fx_model.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_account_type_spreads.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 File loaded: ./modules/accounts/models/trader_accounts.php DEBUG - 2010-12-27 08:39:13 --> 192.168.200.32 Model Class Initialized Thanks
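
    The coupling comes from CodeIgniter's level ordering (ERROR=1, DEBUG=2, INFO=3): any threshold high enough to admit INFO also admits DEBUG. A hedged workaround, assuming CI's standard class-extension mechanism (a file at application/libraries/MY_Log.php), is to drop DEBUG entries in an extended logger:

      <?php
      class MY_Log extends CI_Log {

          // Pass ERROR and INFO through; silently swallow DEBUG.
          function write_log($level = 'error', $msg = '', $php_error = FALSE)
          {
              if (strtoupper($level) === 'DEBUG') {
                  return TRUE; // report success without writing anything
              }
              return parent::write_log($level, $msg, $php_error);
          }
      }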

    Read the article

  • How should I ethically approach user password storage for later plaintext retrieval?

    - by Shane
    As I continue to build more and more websites and web applications I am often asked to store user's passwords in a way that they can be retrieved if/when the user has an issue (either to email a forgotten password link, walk them through over the phone, etc.) When I can I fight bitterly against this practice and I do a lot of ‘extra’ programming to make password resets and administrative assistance possible without storing their actual password. When I can’t fight it (or can’t win) then I always encode the password in some way so that it at least isn’t stored as plaintext in the database—though I am aware that if my DB gets hacked that it won’t take much for the culprit to crack the passwords as well—so that makes me uncomfortable. In a perfect world folks would update passwords frequently and not duplicate them across many different sites—unfortunately I know MANY people that have the same work/home/email/bank password, and have even freely given it to me when they need assistance. I don’t want to be the one responsible for their financial demise if my DB security procedures fail for some reason. Morally and ethically I feel responsible for protecting what can be, for some users, their livelihood even if they are treating it with much less respect. I am certain that there are many avenues to approach and arguments to be made for salting hashes and different encoding options, but is there a single ‘best practice’ when you have to store them? In almost all cases I am using PHP and MySQL if that makes any difference in the way I should handle the specifics. Additional Information for Bounty I want to clarify that I know this is not something you want to have to do and that in most cases refusal to do so is best. I am, however, not looking for a lecture on the merits of taking this approach I am looking for the best steps to take if you do take this approach. In a note below I made the point that websites geared largely toward the elderly, mentally challenged, or very young can become confusing for people when they are asked to perform a secure password recovery routine. Though we may find it simple and mundane in those cases some users need the extra assistance of either having a service tech help them into the system or having it emailed/displayed directly to them. In such systems the attrition rate from these demographics could hobble the application if users were not given this level of access assistance, so please answer with such a setup in mind. Thanks to Everyone This has been a fun questions with lots of debate and I have enjoyed it. In the end I selected an answer that both retains password security (I will not have to keep plain text or recoverable passwords), but also makes it possible for the user base I specified to log into a system without the major drawbacks I have found from normal password recovery. As always there were about 5 answers that I would like to have marked correct for different reasons, but I had to choose the best one--all the rest got a +1. Thanks everyone!

    Read the article

  • Loosely coupled implicit conversion

    - by ltjax
    Implicit conversion can be really useful when types are semantically equivalent. For example, imagine two libraries that implement a type identically, but in different namespaces. Or just a type that is mostly identical, except for some semantic-sugar here and there. Now you cannot pass one type into a function (in one of those libraries) that was designed to use the other, unless that function is a template. If it's not, you have to somehow convert one type into the other. This should be trivial (or otherwise the types are not so identical after all!) but calling the conversion explicitly bloats your code with mostly meaningless function calls. While such conversion functions might actually copy some values around, they essentially do nothing from a high-level "programmer's" point of view. Implicit conversion constructors and operators could obviously help, but they introduce coupling, so that one of those types has to know about the other. Usually, at least when dealing with libraries, that is not the case, because the presence of one of those types makes the other one redundant. Also, you cannot always change libraries. Now I see two options on how to make implicit conversion work in user-code: The first would be to provide a proxy-type, that implements conversion-operators and conversion-constructors (and assignments) for all the involved types, and always use that. The second requires a minimal change to the libraries, but allows great flexibility: Add a conversion-constructor for each involved type that can be externally optionally enabled. For example, for a type A add a constructor: template <class T> A( const T& src, typename boost::enable_if<conversion_enabled<T,A>>::type* ignore=0 ) { *this = convert(src); } and a template template <class X, class Y> struct conversion_enabled : public boost::mpl::false_ {}; that disables the implicit conversion by default. Then to enable conversion between two types, specialize the template: template <> struct conversion_enabled<OtherA, A> : public boost::mpl::true_ {}; and implement a convert function that can be found through ADL. I would personally prefer to use the second variant, unless there are strong arguments against it. Now to the actual question(s): What's the preferred way to associate types for implicit conversion? Are my suggestions good ideas? Are there any downsides to either approach? Is allowing conversions like that dangerous? Should library implementers in general supply the second method when it's likely that their type will be replicated in software it is most likely being used with (I'm thinking of 3D-rendering middleware here, where most of those packages implement a 3D vector)?
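
    To make the second variant concrete, here is a hedged usage sketch gluing two hypothetical vector types together (lib_a::Vec3, lib_b::Vec3 and their members are invented; the specialization must live in whatever namespace declares conversion_enabled, assumed global here):

      namespace lib_b {
          // Found via ADL because the argument is a lib_b type.
          inline lib_a::Vec3 convert(const Vec3& v) {
              return lib_a::Vec3(v.x, v.y, v.z);
          }
      }

      // Opt in: lib_b::Vec3 may implicitly convert to lib_a::Vec3.
      template <>
      struct conversion_enabled<lib_b::Vec3, lib_a::Vec3>
          : public boost::mpl::true_ {};

      void draw(const lib_a::Vec3& position); // part of library A's API

      void user_code(const lib_b::Vec3& v) {
          draw(v); // lib_a::Vec3's templated constructor kicks in here
      }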

    Read the article
