Search Results

Search found 9403 results on 377 pages for 'high tech'.

Page 315/377 | < Previous Page | 311 312 313 314 315 316 317 318 319 320 321 322  | Next Page >

  • chrome extension login security with iframe

    - by Weaver
    I should note, I'm not a Chrome extension expert. However, I'm looking for some advice or a high-level solution to a security concern I have with my Chrome extension. I've searched quite a bit but can't seem to find a concrete answer.
    The situation: I have a Chrome extension that needs the user to log in to our backend server. However, it was decided for design reasons that the default Chrome popup balloon was undesirable. Thus I've used a modal dialog and jQuery to make a styled popup that is injected with content scripts. Hence, the popup is injected into the DOM of the page you are visiting.
    The problem: Everything works, however now that I need to implement login functionality I've noticed a vulnerability: if the site we've injected our popup into knows the password field's ID, it could run a script to continuously monitor the password and username fields and store that data. Call me paranoid, but I see it as a risk. In fact, I wrote a mockup attack site that can correctly pull the username and password when they are entered into the given fields.
    My devised solution: I took a look at some other Chrome extensions, like Buffer, and noticed that what they do is load their popup from their website and instead embed an iframe which contains the popup; the popup interacts with the server inside the iframe. My understanding is that iframes are subject to the same same-origin scripting policies as other websites, but I may be mistaken. As such, would doing the same thing be secure?
    TL;DR: To simplify, if I embedded an HTTPS login form from our server into a given DOM, via a Chrome extension, are there security concerns about password sniffing? If this is not the best way to deal with Chrome extension logins, do you have suggestions as to what is? Perhaps there is a way to declare text fields that JavaScript simply cannot interact with? Not too sure! Thank you so much for your time! I will happily clarify anything required.

    Read the article

  • WPF abnormal CPU usage for animation

    - by 0xDEAD BEEF
    Hi! I am developing a WPF application and a client reports extremely high CPU usage (90%), whereas I am unable to reproduce that behavior. I have traced the bottleneck down to these lines. It is a simple glowing animation for a small single-LED control (blinking LED). What could be the reason for this simple animation taking up such huge CPU resources?
    <Trigger Property="State">
      <Trigger.Value>
        <local:BlinkingLedStatus>Blinking</local:BlinkingLedStatus>
      </Trigger.Value>
      <Trigger.EnterActions>
        <BeginStoryboard Name="beginStoryBoard">
          <Storyboard>
            <DoubleAnimation Storyboard.TargetName="glow" Storyboard.TargetProperty="Opacity"
                             AutoReverse="True" From="0.0" To="1.0" Duration="0:0:0.5" RepeatBehavior="Forever"/>
          </Storyboard>
        </BeginStoryboard>
      </Trigger.EnterActions>
      <Trigger.ExitActions>
        <StopStoryboard BeginStoryboardName="beginStoryBoard"/>
      </Trigger.ExitActions>
    </Trigger>

    Read the article

  • Is there a library that can decompile a method into an Expression tree, with support for CLR 4.0?

    - by Daniel Earwicker
    Previous questions have asked if it is possible to turn compiled delegates into expression trees, for example: http://stackoverflow.com/questions/767733/converting-a-net-funct-to-a-net-expressionfunct The sane answers at the time were: it's possible, but very hard and there's no standard library solution; use Reflector! But fortunately there are some greatly-insane/insanely-great people out there who like reverse engineering things, and they make difficult things easy for the rest of us. Clearly it is possible to decompile IL to C#, as Reflector does it, and so you could in principle instead target CLR 4.0 expression trees, with support for all statement types. This is interesting because it wouldn't matter if the compiler's built-in special support for Expression<> lambdas is never extended to support building statement expression trees in the compiler; a library solution could fill the gap. We would then have a high-level starting point for writing aspect-like manipulations of code without having to mess with raw IL. As noted in the answers to the question linked above, there are some promising signs, but by searching I haven't been able to find out whether there's been much progress since. So has anyone finished this job, or got very far with it? Note: CLR 4.0 is now released. Time for another look-see.

    Read the article

  • How to nest shapes in a DSL Tools diagram?

    - by Paul Lalonde
    I have a DSL containing two main domain classes: Area and Entity. Areas are represented visually by a GeometryShape, whereas entities are represented by a CompartmentShape. Entities can be embedded in an Area, or not (in which case they are embedded in the root object, which is a kind of Area). There may be relationships between entities, including between entities in different areas. Areas cannot be embedded inside other areas, nor can entities be embedded inside other entities. My problem is that I cannot get the behavior I want from the diagram. The embedding of entities in areas works perfectly well at the model level, but the visual representation behaves erratically. For example, if I drag an entity that was created in an area outside of that area, it no longer responds to mouse clicks (I have code that performs the re-parenting, but somehow the diagram side of things is broken). I have searched high and low for samples of how to do this, and come up empty. Every example I've found on the web simulates nesting via "references" relationships, whereas I am performing true embedding of the domain classes (and therefore of their associated shape classes). Does anyone have an example of how to do this? While I'm venting, am I the only one who thinks the diagram/shape classes are massively under-documented?

    Read the article

  • Creating/Maintaining a large project-agnostic code library

    - by bufferz
    In order to reduce repetition and streamline testing/debugging, I'm trying to find the best way to develop a group of libraries that many projects can utilize. I'd like to keep individual executables relatively small, and have shared libraries for math, database, collections, graphics, etc. that were previously scattered among several projects and in many cases duplicated (bad!). This library is to be in an SVN repo and several programmers will be working on it. This library will be in constant development along with the executables that utilize it. For example, I want a code file in ProjectA to look something like the following:
    using MyCompany.Math.2D; // static 2D math methods
    using MyCompany.Math.3D; // static 3D math methods
    using MyCompany.Comms.SQL; // static methods for doing simple SQL DB I/O
    using MyCompany.Graphics.BitmapOperations; // static methods that play with bitmaps
    So in my ProjectA solution file in Visual Studio, in order to develop/debug the MyCompany library I have to add several projects (Math, Comms, Graphics). Things get pretty cluttered and solution files get out of date quickly between programmers' SVN commits. I'm just looking for a high-level approach to maintaining a large, shared code base in an SVN repository. I am fully willing to radically redesign my approach. I'm looking for that warm fuzzy feeling you get when your design approach is spot on and development is fluid and natural. Any ideas? Thanks!!

    Read the article

  • A map and set which uses contiguous memory and has a reserve function

    - by edA-qa mort-ora-y
    I use several maps and sets. The lack of contiguous memory, and the high number of (de)allocations, is a performance bottleneck. I need a mainly STL-compatible map and set class which can use a contiguous block of memory for internal objects (or multiple blocks). It also needs to have a reserve function so that I can preallocate for expected sizes. Before I write my own I'd like to check what is available first. Is there something in Boost which does this? Does somebody know of an available implementation elsewhere? Intrusive collection types are not usable here, as the same objects need to exist in several collections. As far as I know, STL memory pools are per-type, not per-instance. These global pools are not efficient with respect to memory locality in multi-CPU/core processing. Object pools don't work, as the types will be shared between instances but their pools should not be. A hash map may be an option in some cases.

    Read the article

  • "É" not getting converted to two bytes correctly.

    - by ChrisF
    Further to this question I've got a supplementary problem. I've found a track with an "É" in the title. My code:
    var playList = new StreamWriter(playlist, false, Encoding.UTF8);
    ...
    private static void WriteUTF8(StreamWriter playList, string output)
    {
        byte[] byteArray = Encoding.UTF8.GetBytes(output);
        foreach (byte b in byteArray)
        {
            playList.Write(Convert.ToChar(b));
        }
    }
    converts this to the following bytes: 195 137, which is being output as Ã followed by a square (a character that can't be printed in the current font). I've exported the same file to a playlist in Media Monkey and it writes the "É" as "É" - which I'm assuming is correct (as KennyTM pointed out). My question is, how do I get the "‰" symbol output? Do I need to select a different font and if so which one?
    UPDATE: People seem to be missing the point. I can get the "É" written to the file using playList.WriteLine("É"); that's not the problem. The problem is that Media Monkey requires the file to be in the following format:
    #EXTINFUTF8:140,Yann Tiersen - Comptine D'Un Autre Été: L'Après Midi
    #EXTINF:140,Yann Tiersen - Comptine D'Un Autre Été: L'Après Midi
    #UTF8:04-Comptine D'Un Autre Été- L'Après Midi.mp3
    04-Comptine D'Un Autre Été- L'Après Midi.mp3
    where all the "high-ASCII" characters (for want of a better term) are written out as a pair of characters.
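    To make the byte arithmetic concrete: the 195 137 pair is simply the UTF-8 encoding of É (U+00C9) viewed through a single-byte code page. A minimal sketch of that round trip (in Java rather than C#, purely to illustrate the encoding facts, not the original StreamWriter code):
    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class UTF8Demo {
        public static void main(String[] args) {
            String eAcute = "\u00C9";                        // É
            byte[] utf8 = eAcute.getBytes(StandardCharsets.UTF_8);
            for (byte b : utf8) {
                System.out.printf("%d ", b & 0xFF);          // prints: 195 137
            }
            System.out.println();
            // Interpreting those same two bytes as Windows-1252 gives "Ã" + "‰",
            // i.e. the "É" pair that Media Monkey writes into its playlists.
            System.out.println(new String(utf8, Charset.forName("windows-1252")));
        }
    }
    So the "pair of characters" Media Monkey expects is just the raw UTF-8 byte sequence left undecoded; whether a viewer shows the second byte as "‰" or a box depends on which single-byte code page (and font) it assumes.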

    Read the article

  • ANTLR Tree Grammar and StringTemplate Code Translation

    - by Behrooz Nobakht
    I am working on a code translation project with a sample ANTLR tree grammar such as:
    start: ^(PROGRAM declaration+) -> program_decl_tmpl();
    declaration: class_decl | interface_decl;
    class_decl: ^(CLASS ^(ID CLASS_IDENTIFIER)) -> class_decl_tmpl(cid={$CLASS_IDENTIFIER.text});
    The group template file for it looks like:
    group My;
    program_decl_tmpl() ::= <<
    *WHAT?*
    >>
    class_decl_tmpl(cid) ::= <<
    public class <cid> {}
    >>
    Based on this, I have these questions: Everything works fine, but what should I express in *WHAT?* to say that a program is simply a list of class declarations, so that I get the final generated output? Is this approach generally suitable for a not-so-high-level language? I have also studied ANTLR Code Translation with String Templates, but it seems that that approach relies heavily on interleaving code in the tree grammar. Is it also possible to do it as much as possible just in String Templates?

    Read the article

  • Matplotlib PDF export uses wrong font

    - by Konrad Rudolph
    I want to generate high-quality diagrams for a presentation. I’m using Python’s matplotlib to generate the graphics. Unfortunately, the PDF export seems to ignore my font settings. I tried setting the font both by passing a FontProperties object to the text drawing functions and by setting the option globally. For the record, here is a MWE to reproduce the problem:
    import scipy
    import matplotlib
    matplotlib.use('cairo')
    import matplotlib.pylab as pylab
    import matplotlib.font_manager as fm

    data = scipy.arange(5)

    for font in ['Helvetica', 'Gill Sans']:
        fig = pylab.figure()
        ax = fig.add_subplot(111)
        ax.bar(data, data)
        ax.set_xticks(data)
        ax.set_xticklabels(data, fontproperties = fm.FontProperties(family = font))
        pylab.savefig('foo-%s.pdf' % font)
    In both cases, the produced output is identical and uses Helvetica (and yes, I do have both fonts installed). Just to be sure, the following doesn’t help either:
    matplotlib.rc('font', family = 'Gill Sans')
    Finally, if I replace the backend, instead using the native viewer:
    matplotlib.use('MacOSX')
    I do get the correct font displayed – but only in the viewer GUI. The PDF output is once again wrong. To be sure – I can set other fonts – but only other classes of font families: I can set serif fonts or fantasy or monospace. But all sans-serif fonts seem to default to Helvetica.

    Read the article

  • How to Manage CSS Explosion

    - by Jason
    I have been heavily relying on CSS for a website that I am working on (currently, everything is done as property values within each tag on the website and I'm trying to get away from that to make updates significantly easier). The problem I am running into is that I'm starting to get a bit of "CSS explosion" going on. It is becoming difficult for me to decide how to best organize and abstract data within the CSS file. For example: I am using a large number of div tags within the website (previously it was completely table-based). So I'm starting to get a lot of CSS that looks like this...
    div.title {
        background-color: Blue;
        color: White;
        text-align: center;
    }
    div.footer {
        /* Stuff Here */
    }
    div.body {
        /* Stuff Here */
    }
    etc. It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to get to is a point where I have a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive. (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and demonstrate their power to increase the speed of web development, so other individuals that may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)

    Read the article

  • Help, my CentOS servers keep going down, No route to host after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers that have a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They receive some RPC commands from a main server to start downloading processes with wget, nothing fancy, but their behavior is very unstable - they simply go down. We tried to monitor RAM, processor usage, even network connections; they don't load up that much: 250 network connections max, 15% processor usage, and memory doesn't even fill up - 2.5GB out of 8GB max. I have no idea why a Linux server can go down like that; they aren't even public servers, no domain names installed, no public serving of sites. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, but without reporting high usage of resources. Maybe CentOS doesn't free the timed-out connections, or something like that... It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily have an answer to this, or even have some leads as to what I can do. I haven't installed snort or other tools to check whether we are under some DoS attacks; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance.

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; a database, PostgreSQL, MySQL, SQL Server, or Oracle. Several options are available for storing the password, but I want to think about one: store the password hashed with a random salt in the DB, in one column. Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one column for the password, and the cost of updating the table and the code could be too high. But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password? I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password. Then re-hashing to match using the hash algorithm...??? So... any takers on helping. :) Am I close?
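    For what it's worth, the usual single-column scheme is not to recover the salt from the hash (that isn't possible) but to store the salt next to the hash inside the same column, then reuse it at verification time. A minimal Java sketch of that idea, assuming SHA-256 and an invented "base64(salt):base64(hash)" column layout (java.util.Base64 needs Java 8; any encoder works on 1.6):
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Base64;

    public class SaltedHash {
        // Build the value to store in the single password column: "base64(salt):base64(hash)".
        public static String create(String password) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);            // random, non-secret salt
            return Base64.getEncoder().encodeToString(salt) + ":" +
                   Base64.getEncoder().encodeToString(digest(salt, password));
        }

        // Verify by splitting the stored value, re-hashing the candidate with the stored salt,
        // and comparing the two hashes.
        public static boolean verify(String stored, String candidate) throws Exception {
            String[] parts = stored.split(":");
            byte[] salt = Base64.getDecoder().decode(parts[0]);
            byte[] expected = Base64.getDecoder().decode(parts[1]);
            return MessageDigest.isEqual(expected, digest(salt, candidate));
        }

        private static byte[] digest(byte[] salt, String password) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);                                // hash(salt || password)
            return md.digest(password.getBytes("UTF-8"));
        }

        public static void main(String[] args) throws Exception {
            String stored = create("foobar12");
            System.out.println(stored);                     // e.g. "qZ0...=:3kF...="
            System.out.println(verify(stored, "foobar12")); // true
            System.out.println(verify(stored, "wrong"));    // false
        }
    }
    A production system would use a deliberately slow KDF (PBKDF2, bcrypt, scrypt) rather than a single SHA-256 pass, but the store-and-verify flow is the same.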

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level the problem is as below. The following query executes quickly (1 second):
    SELECT SA.*
    FROM cg.SEARCHSERVER_ACTYS AS SA
    JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
    but if we add a filter to the query, then it takes approximately 2 minutes to return:
    SELECT SA.*
    FROM cg.SEARCHSERVER_ACTYS AS SA
    JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1 ON T1.[Key] = SA.UNIQUE_ID
    WHERE SA.CHG_DATE > '19 Feb 2010'
    Looking at the execution plan for the two queries, I can see that in the second case there are two places where there are huge differences between the actual and estimated number of rows, these being: 1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1,670 rows before the join), and 2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows. As a result of the estimates, the optimiser is choosing to use a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem by either (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN) to the query or (b) forcing a HASH JOIN to be used. In both of these cases the query returns in under 1 second and the estimates appear reasonable. My question really is: why are the estimates used in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes on the indexed view being used here. Any help greatly appreciated.

    Read the article

  • ASP.NET MVC View ReRenders Part of Itself

    - by Jason
    In all my years of .NET programming I have not run across a bug as weird as this one. I discovered the problem because some elements on the page were getting double-bound by jQuery. After some (ridiculous) debugging, I finally discovered that once the view is completely done rendering itself and all its children partial views, it goes back to an arbitrary yet consistent location and re-renders itself. I have been pulling my hair out about this for two days now and I simply cannot get it to render itself only once! For lack of any better debugging idea, I've painstakingly added logging tracers throughout the HTML just so I can pin down what may be causing this. For instance, this code ($log just logs to the console):
    ...
    <script type="text/javascript">var x = 0; $log('1');</script>
    <div id="new-ad-form">
    <script type="text/javascript">x++; $log('1.5', x);</script>
    ...
    will yield
    ... <--- this happens before this snippet
    1
    1.5 1
    ...
    10 <--- bottom of my form, after snippet
    1.5 2 <--- beginning of part that runs again!
    ...
    9 <--- this happens after this snippet
    I've searched my codebase high and low, but there is NOTHING that says it should re-render part of a page. I'm wondering if jQueryUI has anything to do with it, as #new-ad-form is the container for a jQueryUI dialog box. If this is potentially the case, here's my init code for that:
    $('#new-ad-form').dialog({
        autoOpen: false,
        modal: true,
        width: 470,
        title: 'Create A New Ad',
        position: ['center', 35],
        close: AdEditor.reset
    });

    Read the article

  • Reasonable expectation to support new Operating Systems?

    - by Neil N
    My company has a desktop app originally developed for Windows XP. The original programmer has since been fired (fired with extreme prejudice, I might add). I have fixed the app various times but overall try to avoid it; it is a mess, and the only real way to fix it is to completely rewrite it, which could take a year. We have been trying to "forget" about this app and instead steer clients towards our web version, which is more up to date, easier to maintain, easier to extend, and WAY easier to support. Most clients agree: the web version is just better all around. However, we have one client that insists on using the desktop app. The app required a little duct tape to get working on Vista, but now completely breaks on Windows 7. I'm not even sure WHAT all the fixes are to get it working on Win7 (the current time estimate stands at "miracle"), but after both installing the RELEASE build and running the DEBUG build from Visual Studio, the app has errors on nearly every user action, and from what I can see from a high-level test run, none of them are related. Since Windows 7 did not exist when this app was developed, is my company really expected to make all the required changes to make it function as "smoothly" as it did on XP?

    Read the article

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure and tried to compile with:
    make CC="/opt/local/bin/i386-mingw32-g++"
    That didn't work because the configure script had found include files that were not available to the mingw system. So then I tried:
    ./configure CC="/opt/local/bin/i386-mingw32-g++"
    But that didn't work; the configure script gives me this error:
    ./configure: line 5209: syntax error near unexpected token `newline'
    ./configure: line 5209: ` *_cv_*'
    because of this code:
    # The following way of writing the cache mishandles newlines in values,
    # but we know of no workaround that is simple, portable, and efficient.
    # So, we kill variables containing newlines.
    # Ultrix sh set writes to stderr and can't be redirected directly,
    # and sets the high bit in the cache file unless we assign to the vars.
    (
      for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
        eval ac_val=\$$ac_var
        case $ac_val in #(
        *${as_nl}*)
          case $ac_var in #(
          *_cv_*
    fi
    which is generated when AC_OUTPUT is called. Any thoughts? Is there a correct way to do this?

    Read the article

  • Fetching real time data from excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help me. I am developing a VB.NET app in which I read "real-time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet are fetching stock data from some LOCAL DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on... Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. Values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing. THE PROBLEM is here... "FIXED values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the local DDE data source is ON) or "-2146826265" (OFF). However, if I use C#.NET it's all OK, but not with VB.NET. I want to display a range of Excel cells (A1 to J50) in a VB.NET ListView; the values change every 200ms (5 times every second).
    ================ Important ======================================================
    Is it possible to BIND "ListView items/columns values" to "Excel cells" or to some local memory variables? Currently, I am reading Excel "cell by cell" and trying to put the values into the .NET ListView, but CPU usage is very high and it is a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution...

    Read the article

  • Is the Windows dev environment worth the cost?

    - by MCS
    I recently made the move from Linux development to Windows development. And as much of a Linux enthusiast that I am, I have to say - C# is a beautiful language, Visual Studio is terrific, and now that I've bought myself a trackball my wrist has stopped hurting from using the mouse so much. But there's one thing I can't get past: the cost. Windows 7, Visual Studio, SQL Server, Expression Blend, ViEmu, Telerik, MSDN - we're talking thousands for each developer on the project! You're definitely getting something for your money - my question is, is it worth it? [Not every developer needs all the aforementioned tools - but have you ever heard of anyone writing C# code without Visual Studio? I've worked on pretty large software projects in Linux without having to pay for any development tool whatsoever.] Now obviously, if you're already a Windows shop, it doesn't pay to retrain all your developers. And if you're looking to develop a Windows desktop app, you just can't do that in Linux. But if you were starting a new web application project and could hire developers who are experts in whatever languages you want, would you still choose Windows as your development platform despite the high cost? And if yes, why?

    Read the article

  • verilog / systemverilog -- What is the behavior of blocking statements across two always blocks?

    - by miles.sherman
    I am wondering about the behavior of the code below. There are two always blocks: one is combinational, to calculate the next_state signal; the other is sequential, which performs some logic and determines whether or not to shut down the system. It does this by setting the shutdown_now signal high and then calling state <= next_state. My question is: if the conditions become true such that the shutdown_now signal is set (during clock cycle n) in a blocking manner before the state <= next_state line, will the state during clock cycle n+1 be SHUTDOWN or RUNNING? In other words, does the shutdown_now = 1'b1 line block across both always blocks, since the state signal is dependent on it through the next_state determination?
    enum {IDLE, RUNNING, SHUTDOWN} state, next_state;
    logic shutdown_now;

    // State machine (combinational)
    always_comb begin
      case (state)
        IDLE:     next_state <= RUNNING;
        RUNNING:  next_state <= shutdown_now ? SHUTDOWN : RUNNING;
        SHUTDOWN: next_state <= SHUTDOWN;
        default:  next_state <= SHUTDOWN;
      endcase
    end

    // Sequential behavior
    always_ff @ (posedge clk) begin
      // Some code here
      if (/* some condition */) begin
        shutdown_now = 1'b0;
      end else begin
        shutdown_now = 1'b1;
      end
      state <= next_state;
    end

    Read the article

  • What is the correct JNA mapping for UniChar on Mac OS X?

    - by Trejkaz
    I have a C struct like this:
    struct HFSUniStr255 {
        UInt16 length;
        UniChar unicode[255];
    };
    I have mapped this in the expected way:
    public class HFSUniStr255 extends Structure {
        public UInt16 length; // UInt16 is just an IntegerType with length 2, for convenience.
        public /*UniChar*/ char[] unicode = new char[255];
        //public /*UniChar*/ byte[] unicode = new byte[255*2];
        //public /*UniChar*/ UInt16[] unicode = new UInt16[255];

        public HFSUniStr255() {
        }

        public HFSUniStr255(Pointer pointer) {
            super(pointer);
        }
    }
    If I use this version, I get every second character of the string into my char[] ("aits D" for "Macintosh HD"). I am assuming that this has something to do with being on a 64-bit platform and JNA mapping the value to a 32-bit wchar_t, then chopping off the high 16 bits of each wchar_t when copying them back. If I use the byte[] version, I get data which decodes correctly using the UTF-16LE charset. If I use the UInt16[] version, I get the right code point for each character, but it is then inconvenient to convert them back into a string. Is there some way I can define my type as char[] and yet have it convert correctly?
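    Just to spell out the workaround the question says does decode correctly (the byte[] mapping plus UTF-16LE), here is a small self-contained sketch of the decoding step; the helper name and the little-endian assumption are mine, based on what the asker observed:
    import java.nio.charset.StandardCharsets;

    public final class UniCharDecoder {
        /**
         * Decode the first 'length' UniChar (UTF-16 code units) from the raw bytes of the
         * struct's unicode field, assuming little-endian byte order as observed in the question.
         */
        public static String decode(byte[] unicodeBytes, int length) {
            return new String(unicodeBytes, 0, length * 2, StandardCharsets.UTF_16LE);
        }

        public static void main(String[] args) {
            // "Hi" as little-endian UTF-16 bytes: 'H' = 0x48 0x00, 'i' = 0x69 0x00
            byte[] raw = {0x48, 0x00, 0x69, 0x00, 0x00, 0x00};
            System.out.println(decode(raw, 2));   // prints "Hi"
        }
    }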

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using autolayout (the Lion feature)? I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get a similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated for autolayout. Edited to clarify: The problem I experience is that the document view, which holds subviews, is not re-sized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method. What I hoped to obtain was that the document view frame would automatically be adjusted when its fittingSize changes, and the scrollbars updated accordingly. NSPopover does something similar - provided that the subviews of the content controller's view have their constraints set right and the various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem; if anyone has any references I can read to help me solve my problem even better, please share them. I have a classification problem of four discrete and very well separated classes. However, my input is continuous and has a high frequency (50Hz), since it's a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and Class 5 is the (neutral/resting, do-nothing class). This class is the rejected class. The problem is that when I move from one class to another I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to 1, I first see a lot of 3's before getting to the 1 class. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is Class 5: it has a higher decision boundary than the other classes, to avoid misclassification during transitions. I am currently implementing my algorithm in Matlab, using its optimized naive Bayes, kNN, and SVM algorithms. Question: What is the best/common way to handle abstain/rejected classes? Should I use fuzzy logic or a loss function, or should I include the resting cluster in the training?
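    One common baseline (not necessarily the best answer here) is to keep the classifier as-is but accept its prediction only when the winning class's posterior clears a minimum confidence, falling back to the rejected/neutral class otherwise, optionally smoothed with a short majority filter over the 50Hz stream to suppress transition artifacts. A sketch of that decision rule in Java; the threshold value and class numbering are illustrative only:
    public final class RejectingClassifier {
        private static final int REST_CLASS = 5;             // the neutral/"do nothing" class from the question
        private static final double MIN_CONFIDENCE = 0.85;   // illustrative threshold, tune on held-out data

        /**
         * posteriors[i] = estimated probability of class i+1 for the current sample.
         * Returns the predicted class, or REST_CLASS when no class is confident enough.
         */
        public static int classify(double[] posteriors) {
            int best = 0;
            for (int i = 1; i < posteriors.length; i++) {
                if (posteriors[i] > posteriors[best]) {
                    best = i;
                }
            }
            return posteriors[best] >= MIN_CONFIDENCE ? best + 1 : REST_CLASS;
        }

        public static void main(String[] args) {
            double[] posteriors = {0.05, 0.10, 0.70, 0.10, 0.05};  // class 3 wins but is below threshold
            System.out.println(classify(posteriors));               // prints 5 (rejected -> neutral class)
        }
    }
    Raising MIN_CONFIDENCE effectively inflates the rejected region around the decision boundaries, which is the picture sketched above; the cost is that more samples fall back to class 5 during genuine activity.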

    Read the article

  • How to get height for NSAttributedString at a fixed width

    - by bonaldi
    I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried:
    Calling -(NSSize)size, but the results are useless (for this purpose), as they'll give whatever width the string desires.
    Calling -(void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread).
    Creating a temporary NSTextView and doing:
    [[tmpView textStorage] setAttributedString:aString];
    [tmpView setHorizontallyResizable:NO];
    [tmpView sizeToFit];
    When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short)).
    I'd appreciate any pointers, if anyone else has managed this.

    Read the article

  • Caching page by parts; how to pass variables calculated in cached parts into never-cached parts?

    - by Kirzilla
    Hello, let's imagine that I have code like this...
    if (!$data = $cache->load("part1_cache_id")) {
        $item_id = $model->getItemId();
        ob_start();
        echo 'Here is the cached item id: '.$item_id;
        $data = ob_get_contents();
        ob_end_clean();
        $cache->save($data, "part1_cache_id");
    }
    echo $data;
    echo never_cache_function($item_id);
    if (!$data_2 = $cache->load("part2_cache_id")) {
        ob_start();
        echo 'Here is the another cached part of the page...';
        $data_2 = ob_get_contents();
        ob_end_clean();
        $cache->save($data_2, "part2_cache_id");
    }
    echo $data_2;
    As you can see, I need to pass the $item_id variable into never_cache_function, but if the first part is cached ("part1_cache_id") then I have no way to get the $item_id value. The only solution I see is to serialize all data from the first part (including the $item_id value), cache the serialized string, and unserialize it every time the script is executed... Something like this...
    if (!$data = $cache->load("part1_cache_id")) {
        $item_id = $model->getItemId();
        $data['item_id'] = $item_id;
        ob_start();
        echo 'Here is the cached item id: '.$item_id;
        $data['html'] = ob_get_contents();
        ob_end_clean();
        $cache->save( serialize($data), "part1_cache_id" );
    }
    $data = unserialize($data);
    echo $data['html'];
    echo never_cache_function($data['item_id']);
    Is there any other way of doing such a trick? I'm looking for the highest-performance solution. Thank you.
    UPDATED: And another question is: how do I implement such caching in a controller without separating the page into two templates? Is it possible? PS: Please do not suggest Smarty; I'm really interested in implementing custom caching.

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it. I have a class that is basically working as a DB cache. It holds a bunch of objects that it has read from a database, serves requests for those objects, and then occasionally refreshes from the database (based on a timeout). Here's the skeleton:
    public class Cache {
        private HashMap mappings = ....;
        private long last_update_time;

        private void loadMappingsFromDB() {
            //....
        }

        private void checkLoad() {
            if (System.currentTimeMillis() - last_update_time > TIMEOUT)
                loadMappingsFromDB();
        }

        public Data get(ID id) {
            checkLoad();
            //.. look it up
        }
    }
    So the concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. Initially I thought I could spin up a thread on cache startup, have it sleep, and then update the cache in the background. But then I would need to synchronize my class (or the map), and I would just be trading an occasional big pause for making every cache access slower. Then I thought, why not use volatile? I could define the map reference as volatile:
    private volatile HashMap mappings = ....;
    and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference:
    public Data get(ID id) {
        HashMap local = mappings;
        //.. look it up using local
    }
    and then the background thread would just load into a temporary map and swap the references in the class:
    HashMap tmp;
    //load tmp from DB
    mappings = tmp; //swap variables, forcing a write barrier
    Does this approach make sense? And is it actually thread-safe?
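    For reference, a compilable sketch of the reference-swap idea described above - a background refresher builds a brand-new map off to the side and publishes it with a single volatile write, while readers take exactly one snapshot of the reference per lookup (the scheduling details and the loadMappingsFromDB placeholder are illustrative):
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class Cache<K, V> {
        // Readers see either the old map or the new one, never a half-built map,
        // because the fully populated map is published by a single volatile write.
        private volatile Map<K, V> mappings = new HashMap<K, V>();

        public Cache(long refreshSeconds) {
            ScheduledExecutorService refresher = Executors.newSingleThreadScheduledExecutor();
            refresher.scheduleAtFixedRate(new Runnable() {
                public void run() {
                    Map<K, V> fresh = loadMappingsFromDB();  // build off to the side
                    mappings = fresh;                        // swap: the volatile write publishes it
                }
            }, 0, refreshSeconds, TimeUnit.SECONDS);
        }

        public V get(K id) {
            Map<K, V> local = mappings;   // one snapshot per lookup
            return local.get(id);
        }

        // Placeholder for the real database load.
        private Map<K, V> loadMappingsFromDB() {
            return new HashMap<K, V>();
        }
    }
    The two constraints that make this safe without locking the read path are that the freshly built map is never mutated after the swap, and that a reader never re-reads mappings partway through a single operation.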

    Read the article
