Search Results

Search found 3118 results on 125 pages for 'fragment caching'.

Page 74/125 | < Previous Page | 70 71 72 73 74 75 76 77 78 79 80 81  | Next Page >

  • What's the shebang in Facebook URLs for?

    - by BoltClock
    I've just noticed that the long, convoluted Facebook URLs that we're used to now look like this: http://www.facebook.com/example.profile#!/pages/Some-Other-Page/123456789012345 As far as I can recall, earlier this year it was just a normal URL-fragment-like string (starting with #), without the exclamation mark. But now it's a shebang (#!), which I've previously only seen in shell scripts and Perl scripts. Does #! now play some special role in URLs, like for a certain Ajax framework or something since Facebook's interface is now largely Ajaxified? Or is it for some other purpose?

    Read the article

  • Using Node.js as an accelerator for WCF REST services

    - by Elton Stoneman
    Node.js is a server-side JavaScript platform "for easily building fast, scalable network applications". It's built on Google's V8 JavaScript engine and uses an (almost) entirely async event-driven processing model, running in a single thread. If you're new to Node and your reaction is "why would I want to run JavaScript on the server side?", this is the headline answer: in 150 lines of JavaScript you can build a Node.js app which works as an accelerator for WCF REST services*. It can double your messages-per-second throughput, halve your CPU workload and use one-fifth of the memory footprint, compared to calling the WCF services directly.

    Well, it can if: 1) your WCF services are first-class HTTP citizens, honouring client cache ETag headers in request and response; 2) your services do a reasonable amount of work to build a response; 3) your data is read more often than it's written. In one of my projects I have a set of REST services in WCF which deal with data that only gets updated weekly, but which can be read hundreds of times an hour. The services issue ETags and will return a 304 if the client sends a request with the current ETag, which means in the most common scenario the client uses its local cached copy. But when the weekly update happens, all the client caches are invalidated and they all need the same new data. Then the service will get hundreds of requests with old ETags, and they go through the full service stack to build the same response for each, taking up threads and processing time. Part of that processing means going off to a database on a separate cloud, which introduces more latency and downtime potential.

    We can use ASP.NET output caching with WCF to solve the repeated processing problem, but the server will still be thread-bound on incoming requests, and getting the current ETags reliably needs a database call per request. The accelerator solves that by running as a proxy - all client calls come into the proxy, and the proxy routes calls to the underlying REST service. We could use Node as a straight passthrough proxy and expect some benefit, as the server would be less thread-bound, but we would still have one WCF and one database call per proxy call. But add some smart caching logic to the proxy, and share ETags between Node and WCF (so the proxy doesn't even need to call the service to get the current ETag), and the underlying service will only be invoked when data has changed, and then only once - all subsequent client requests will be served from the proxy cache.

    I've built this as a sample up on GitHub: NodeWcfAccelerator on sixeyed.codegallery. Here's how the architecture looks:

    The code is very simple. The Node proxy runs on port 8010 and all client requests target the proxy. If the client request has an ETag header then the proxy looks up the ETag in the tag cache to see if it is current - the sample uses memcached to share ETags between .NET and Node. If the ETag from the client matches the current server tag, the proxy sends a 304 response with an empty body to the client, telling it to use its own cached version of the data. If the ETag from the client is stale, the proxy looks for a local cached version of the response, checking for a file named after the current ETag. If that file exists, its contents are returned to the client as the body in a 200 response, which includes the current ETag in the header. If the proxy does not have a local cached file for the service response, it calls the service and writes the WCF response both to the local cache file and to the body of a 200 response for the client. So the WCF service is only troubled if both client and proxy have stale (or no) caches.

    The only (vaguely) clever bit in the sample is using the ETag cache, so the proxy can serve cached requests without any communication with the underlying service, which it does completely generically, so the proxy has no notion of what it is serving or what the services it proxies are doing. The relative path from the URL is used as the lookup key, so there's no shared key-generation logic between .NET and Node, and when WCF stores a tag it also stores the "read" URL against the ETag so it can be used for a reverse lookup, e.g.:

        Key                                                 Value
        /WcfSampleService/PersonService.svc/rest/fetch/3    "28cd4796-76b8-451b-adfd-75cb50a50fa6"
        "28cd4796-76b8-451b-adfd-75cb50a50fa6"              /WcfSampleService/PersonService.svc/rest/fetch/3

    In Node we read the cache using the incoming URL path as the key, and we know that "28cd4796-76b8-451b-adfd-75cb50a50fa6" is the current ETag; we look for a local cached response in /caches/28cd4796-76b8-451b-adfd-75cb50a50fa6.body (and the corresponding .header file, which contains the original service response headers, so the proxy response is exactly the same as the underlying service). When the data is updated, we need to invalidate the ETag cache – which is why we need the reverse lookup in the cache. In the WCF update service, we don't need to know the URL of the related read service - we fetch the entity from the database, do a reverse lookup on the tag cache using the old ETag to get the read URL, update the new ETag against the URL, store the new reverse lookup and delete the old one.

    Running Apache Bench against the two endpoints gives the headline performance comparison. Making 1000 requests with concurrency of 100, and not sending any ETag headers in the requests, with the Node proxy I get 102 requests handled per second, an average response time of 975 milliseconds, and 90% of responses served within 850 milliseconds; going direct to WCF with the same parameters, I get 53 requests handled per second, a mean response time of 1853 milliseconds, and 90% of responses served within 3260 milliseconds. Informally monitoring server usage during the tests, Node maxed at 20% CPU and 20Mb memory; IIS maxed at 60% CPU and 100Mb memory.

    Note that the sample WCF service does a database read and sleeps for 250 milliseconds to simulate a moderate processing load, so this is *not* a baseline Node-vs-WCF comparison, but for similar scenarios where the service call is expensive but applicable to numerous clients for a long timespan, the performance boost from the accelerator is considerable.

    * - actually, the accelerator will work nicely for any HTTP request where the URL (path + querystring) uniquely identifies a resource. In the sample, there is an assumption that the ETag is a GUID wrapped in double-quotes (e.g. "28cd4796-76b8-451b-adfd-75cb50a50fa6") – which is the default for WCF services. I use that assumption to name the cache files uniquely, but it is a trivial change to adapt to other ETag formats.
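
    For illustration only, here is a minimal sketch of the proxy's ETag logic. It is not the article's code (the sample is Node.js and shares its tag cache via memcached); the class name, port and the callBackendService helper are hypothetical, and the local-file body cache is reduced to an in-memory map:

        import com.sun.net.httpserver.HttpExchange;
        import com.sun.net.httpserver.HttpServer;
        import java.io.IOException;
        import java.net.InetSocketAddress;
        import java.nio.charset.StandardCharsets;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class EtagProxy {
            static final Map<String, String> tagCache  = new ConcurrentHashMap<>(); // path -> current ETag
            static final Map<String, byte[]> bodyCache = new ConcurrentHashMap<>(); // ETag -> cached body

            public static void main(String[] args) throws IOException {
                HttpServer server = HttpServer.create(new InetSocketAddress(8010), 0);
                server.createContext("/", EtagProxy::handle);
                server.start();
            }

            static void handle(HttpExchange ex) throws IOException {
                String path       = ex.getRequestURI().getPath();
                String clientTag  = ex.getRequestHeaders().getFirst("If-None-Match");
                String currentTag = tagCache.get(path);
                if (currentTag != null && currentTag.equals(clientTag)) {
                    ex.sendResponseHeaders(304, -1);                    // client copy is current
                } else if (currentTag != null && bodyCache.containsKey(currentTag)) {
                    byte[] body = bodyCache.get(currentTag);            // serve from the proxy cache
                    ex.getResponseHeaders().set("ETag", currentTag);
                    ex.sendResponseHeaders(200, body.length);
                    ex.getResponseBody().write(body);
                } else {
                    byte[] body = callBackendService(path);             // only now is the service troubled
                    ex.sendResponseHeaders(200, body.length);
                    ex.getResponseBody().write(body);
                }
                ex.close();
            }

            static byte[] callBackendService(String path) {             // placeholder for the real WCF call
                return "...".getBytes(StandardCharsets.UTF_8);
            }
        }

    The point to notice is the order of the checks: a matching ETag costs nothing, a known ETag costs a cache read, and only an unknown path reaches the underlying service.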

    Read the article

  • Java dropping half of UDP packets

    - by Andrew Klofas
    Greetings, I have a simple client/server setup. The server is in C and the client that is querying the server is Java. My problem is that, when I send bandwidth-intensive data over the connection, such as video frames, it drops up to half the packets. I make sure that I properly fragment the UDP packets on the server side (UDP has a max payload length of 2^16). I verified that the server is sending the packets (printf the result of sendto()). But Java doesn't seem to be getting half the data. Furthermore, when I switch to TCP, all the video frames get through but the latency starts to build up, adding several seconds of delay after a few seconds of runtime. Is there anything obvious that I'm missing? I just can't seem to figure this out. Thanks, Andrew
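
    Not part of the question, but a common cause of this symptom is the default socket receive buffer on the Java side: if the reader can't keep up with a burst of frames, the OS silently drops the overflow. A hedged sketch (the port and buffer size are placeholders):

        import java.net.DatagramPacket;
        import java.net.DatagramSocket;

        public class VideoReceiver {
            public static void main(String[] args) throws Exception {
                DatagramSocket socket = new DatagramSocket(5000);
                socket.setReceiveBufferSize(1 << 20);            // ask the OS for ~1 MB of buffering
                System.out.println("granted: " + socket.getReceiveBufferSize());

                byte[] buf = new byte[65507];                    // maximum UDP payload
                while (true) {
                    DatagramPacket packet = new DatagramPacket(buf, buf.length);
                    socket.receive(packet);
                    // hand the datagram off to another thread; keep this loop as tight as possible
                }
            }
        }

    The OS may cap the requested size, which is why the sketch prints what was actually granted.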

    Read the article

  • Drag and drop image from and to fixed position on fixed path

    - by DMan
    I am trying to allow the user to drag and drop an image from one position to another. The screen layout is as follows:

        1 2 3
        4 5 6
        7 8 9

    I want the user to grab image 2, 4, 6, or 8 and drag it to image 5. Upon dragging to image 5 I want to load up a fragment. The user can only drag the image in a straight line from its current position to 5's position, i.e. image 2 can only drag down, and only until it is on top of image 5; image 4 can only drag right until it is on top of 5, etc. Any insight on how to do this is greatly appreciated. Thanks, DMan
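
    One possible approach, sketched only as an assumption rather than a known answer: intercept touch events and clamp the view's translation to the single axis that leads to slot 5. The helper below is hypothetical; image 4 would use the same idea with translationX instead of translationY.

        import android.view.MotionEvent;
        import android.view.View;

        // Hypothetical helper: restricts a view to dragging straight down, capped at maxOffset.
        public final class VerticalDragHelper {
            public static void attach(final View view, final float maxOffset, final Runnable onDropped) {
                view.setOnTouchListener(new View.OnTouchListener() {
                    float startY, downRawY;

                    @Override
                    public boolean onTouch(View v, MotionEvent e) {
                        switch (e.getActionMasked()) {
                            case MotionEvent.ACTION_DOWN:
                                startY = v.getTranslationY();
                                downRawY = e.getRawY();
                                return true;
                            case MotionEvent.ACTION_MOVE:
                                float dy = e.getRawY() - downRawY;
                                // only allow movement toward slot 5: downward, and no further than maxOffset
                                v.setTranslationY(Math.max(0f, Math.min(startY + dy, maxOffset)));
                                return true;
                            case MotionEvent.ACTION_UP:
                                if (v.getTranslationY() >= maxOffset) {
                                    onDropped.run();   // e.g. swap in the detail Fragment here
                                }
                                return true;
                        }
                        return false;
                    }
                });
            }
        }

    Usage would be something like VerticalDragHelper.attach(image2, distanceToSlot5, () -> showDetailFragment()), where image2, distanceToSlot5 and showDetailFragment are assumed names.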

    Read the article

  • SQL SERVER – SSIS Look Up Component – Cache Mode – Notes from the Field #028

    - by Pinal Dave
    [Notes from Pinal]: Lots of people think that SSIS is all about arranging various operations together in one logical flow. Well, the understanding is absolutely correct, but the implementation of the same is not as easy as it seems. Similarly most of the people think lookup component is just component which does look up for additional information and does not pay much attention to it. Due to the same reason they do not pay attention to the same and eventually get very bad performance. Linchpin People are database coaches and wellness experts for a data driven world. In this 28th episode of the Notes from the Fields series database expert Tim Mitchell (partner at Linchpin People) shares very interesting conversation related to how to write a good lookup component with Cache Mode. In SQL Server Integration Services, the lookup component is one of the most frequently used tools for data validation and completion.  The lookup component is provided as a means to virtually join one set of data to another to validate and/or retrieve missing values.  Properly configured, it is reliable and reasonably fast. Among the many settings available on the lookup component, one of the most critical is the cache mode.  This selection will determine whether and how the distinct lookup values are cached during package execution.  It is critical to know how cache modes affect the result of the lookup and the performance of the package, as choosing the wrong setting can lead to poorly performing packages, and in some cases, incorrect results. Full Cache The full cache mode setting is the default cache mode selection in the SSIS lookup transformation.  Like the name implies, full cache mode will cause the lookup transformation to retrieve and store in SSIS cache the entire set of data from the specified lookup location.  As a result, the data flow in which the lookup transformation resides will not start processing any data buffers until all of the rows from the lookup query have been cached in SSIS. The most commonly used cache mode is the full cache setting, and for good reason.  The full cache setting has the most practical applications, and should be considered the go-to cache setting when dealing with an untested set of data. With a moderately sized set of reference data, a lookup transformation using full cache mode usually performs well.  Full cache mode does not require multiple round trips to the database, since the entire reference result set is cached prior to data flow execution. There are a few potential gotchas to be aware of when using full cache mode.  First, you can see some performance issues – memory pressure in particular – when using full cache mode against large sets of reference data.  If the table you use for the lookup is very large (either deep or wide, or perhaps both), there’s going to be a performance cost associated with retrieving and caching all of that data.  Also, keep in mind that when doing a lookup on character data, full cache mode will always do a case-sensitive (and in some cases, space-sensitive) string comparison even if your database is set to a case-insensitive collation.  This is because the in-memory lookup uses a .NET string comparison (which is case- and space-sensitive) as opposed to a database string comparison (which may be case sensitive, depending on collation).  
There’s a relatively easy workaround in which you can use the UPPER() or LOWER() function in the pipeline data and the reference data to ensure that case differences do not impact the success of your lookup operation.  Again, neither of these present a reason to avoid full cache mode, but should be used to determine whether full cache mode should be used in a given situation. Full cache mode is ideally useful when one or all of the following conditions exist: The size of the reference data set is small to moderately sized The size of the pipeline data set (the data you are comparing to the lookup table) is large, is unknown at design time, or is unpredictable Each distinct key value(s) in the pipeline data set is expected to be found multiple times in that set of data Partial Cache When using the partial cache setting, lookup values will still be cached, but only as each distinct value is encountered in the data flow.  Initially, each distinct value will be retrieved individually from the specified source, and then cached.  To be clear, this is a row-by-row lookup for each distinct key value(s). This is a less frequently used cache setting because it addresses a narrower set of scenarios.  Because each distinct key value(s) combination requires a relational round trip to the lookup source, performance can be an issue, especially with a large pipeline data set to be compared to the lookup data set.  If you have, for example, a million records from your pipeline data source, you have the potential for doing a million lookup queries against your lookup data source (depending on the number of distinct values in the key column(s)).  Therefore, one has to be keenly aware of the expected row count and value distribution of the pipeline data to safely use partial cache mode. Using partial cache mode is ideally suited for the conditions below: The size of the data in the pipeline (more specifically, the number of distinct key column) is relatively small The size of the lookup data is too large to effectively store in cache The lookup source is well indexed to allow for fast retrieval of row-by-row values No Cache As you might guess, selecting no cache mode will not add any values to the lookup cache in SSIS.  As a result, every single row in the pipeline data set will require a query against the lookup source.  Since no data is cached, it is possible to save a small amount of overhead in SSIS memory in cases where key values are not reused.  In the real world, I don’t see a lot of use of the no cache setting, but I can imagine some edge cases where it might be useful. As such, it’s critical to know your data before choosing this option.  Obviously, performance will be an issue with anything other than small sets of data, as the no cache setting requires row-by-row processing of all of the data in the pipeline. I would recommend considering the no cache mode only when all of the below conditions are true: The reference data set is too large to reasonably be loaded into SSIS memory The pipeline data set is small and is not expected to grow There are expected to be very few or no duplicates of the key values(s) in the pipeline data set (i.e., there would be no benefit from caching these values) Conclusion The cache mode, an often-overlooked setting on the SSIS lookup component, represents an important design decision in your SSIS data flow.  Choosing the right lookup cache mode directly impacts the fidelity of your results and the performance of package execution.  
Know how this selection impacts your ETL loads, and you’ll end up with more reliable, faster packages. If you want me to take a look at your server and its settings, or if your server is facing any issue we can Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com)Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL Tagged: SSIS

    Read the article

  • Android: onClick on LinearLayout with TextView and Button

    - by Terry
    I have a Fragment that uses the following XML layout:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="match_parent"
            android:layout_height="match_parent"
            android:background="@drawable/card"
            android:clickable="true"
            android:onClick="editActions"
            android:orientation="vertical" >

            <TextView
                android:id="@+id/title"
                style="@style/CardTitle"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:duplicateParentState="true"
                android:text="@string/title_workstation" />

            <Button
                android:id="@+id/factory_button_edit"
                style="?android:attr/buttonStyleSmall"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:duplicateParentState="true"
                android:text="@string/label_edit" />

        </LinearLayout>

    As you can see, I have an onClick parameter set on the LinearLayout. On the TextView it is triggered correctly, and on all the empty area. Only on the Button does it not invoke the onClick method that I set. Is this normal? What do I have to do so that the onClick method is invoked everywhere on the LinearLayout?
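
    A Button is clickable in its own right, so it consumes the touch instead of letting the parent LinearLayout handle it; android:duplicateParentState only mirrors the visual state. One hedged workaround is to wire the same listener to both views from the Fragment's code (R.id.card_root is an assumed id for the LinearLayout, which has none in the layout above; view is the inflated layout):

        View.OnClickListener editActions = new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                // same handling for the card and its button
            }
        };
        view.findViewById(R.id.card_root).setOnClickListener(editActions);           // the LinearLayout
        view.findViewById(R.id.factory_button_edit).setOnClickListener(editActions);  // the Button

    Also worth remembering that android:onClick declared in a Fragment's layout is resolved against the hosting Activity, not the Fragment.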

    Read the article

  • How to add global exception handling to a add-in dll?

    - by redjackwong
    Here's my context: I am writing a WPF add-in for an application. The application's main thread is unmanaged. I want to add a global exception handling system for this add-in to handle any unhandled exceptions. Here's what I've tried, but it isn't working: I cannot add a try-catch block around the Application.Run() call, because I am an add-in and that code fragment lives in the host application. System.Windows.Forms.Application.ThreadException does not work either; there might not be a WinForms Application at all (WPF is hosted in unmanaged code). AppDomain.CurrentDomain.UnhandledException does not work either, perhaps because the exception is handled by the host application itself; it just never reaches my code. So, any ideas for this situation?

    Read the article

  • Proper snowball analyzer configuration when using Grails Searchable Plugin

    - by Wirsbro
    To improve stemming we want to switch from the default analyzer to snowball; however, we are having a lot of difficulty with the proper settings and would appreciate any help.

    Environment:
        - Sun's Java 1.6.16
        - Grails 1.2.2
        - Searchable Plug-In 0.5.5

    Config.groovy - I have tried both settings:

        compassSettings = ['compass.engine.analyzer.stemmed.type': 'snowball',
                           'compass.engine.analyzer.stemmed.name': 'English']

        compassSettings = ['compass.engine.analyzer.snowball.type': 'snowball',
                           'compass.engine.analyzer.snowball.name': 'English',
                           'compass.engine.analyzer.search.type': 'snowball',
                           'compass.engine.analyzer.search.name': 'English']

    Search.groovy - the invocation:

        def searchResult = searchableService.search(params.q, withHighlighter: { highlighter, index, sr ->
            if (!sr.highlights) { sr.highlights = [] }
            try {
                sr.highlights[index] = highlighter.fragments("content")[0..2].join(" ")
            } catch (IndexOutOfBoundsException ex) {
                sr.highlights[index] = highlighter.fragment("content")
            }
        })
        def suggestion = searchableService.suggestQuery(params.q)
        if (suggestion != params.q) { searchResult.suggestedQuery = suggestion }

    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
    Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems.  Allow me to summarize the idea. It is:  Build a compute platform optimized to run your technologies Develop application aware, intelligently caching storage components Take an impressively fast network technology interconnecting it with the compute nodes Tune the application to scale with the nodes to yet unseen performance Reduce the amount of data moving via compression Provide this all in a pre-integrated single product with a single-pane management interface All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put these all together. Oracle has built quite a portfolio of Engineered Systems, to run its technologies - and run those like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system.  As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate about some of the most interesting ones from a technological point of view.  I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we learned to appreciate and respect for their features: The SPARC T4 CPUs: Each CPU has 8 cores, each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can in parallel, simultaneously  execute 256 threads. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.  While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar with single performance too - a humble 5x better one than their ancestors.  actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between these two on-the-fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!). Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.  A PCI controller right on the chip for customers who need I/O performance.  Built-in, no-cost Virtualization:  Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a serverpartitioning one, the hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead.  This enables customers to run a number of Solaris 10 and Solaris 11 VMs separated, independent of each other within a physical server II. For Database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important? They provide DB backend storage for your Oracle Databases to run on the SPARC SuperCluster, that is what they are built and tuned for DB performance.  These storage cells are SQL-aware.  
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw datablocks from the storage, filters the received data, and throws away most of it where the statement doesn't apply, but provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to prefilter and through that transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other - and that is waiting for I/O.  They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB-node requests an aggregate of data, they can calculate it, and handover only the results, not the whole set. Again, less data to transfer.  They support the magical HCC, (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.  Of course one can't simply rely on disks for performance, there is Flash Storage included there for caching.  III. The low latency, high-speed backbone network: InfiniBand, that interconnects all the members with: Real High Speed: 40 Gbit/s. Full Duplex, of course. Oh, and a really low latency.  RDMA. Remote Direct Memory Access. This technology allows the DB nodes to do exactly that. Remotely, directly placing SQL commands into the Memory of the storage cells. Dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.  You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.  IV. Including a general-purpose storage too: the ZFSSA, which is a unified storage, providing NAS and SAN access too, with the following features:  NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.  All the ZFS features onboard, hybrid storage pools, compression, deduplication, snapshot, replication, NFS and CIFS shares Storageheads in a HA-Cluster configuration providing availability of the data  DTrace Live Analytics in a web-based Administration UI Being a general purpose application data storage for your non-database applications running on the SPARC SuperCluster over whichever protocol they prefer, easily replicating, snapshotting, cloning data for them.  There's a lot of great technology included in Oracle's SPARC SuperCluster, we have talked its interior through. As for external scalability: you can start with a half- of full- rack SPARC SuperCluster, and scale out to several racks - that is, stacking not separate full-rack SPARC SuperClusters, but extending always one large instance of the size of several full-racks. Yes, over InfiniBand network. Add racks as you grow.  What technologies shall run on it? SPARC SuperCluster is a general purpose scaleout consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle Weblogic (end enjoy the SPARC T4's advantages to run Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle on Oracle advantage. But you can run other software environments such as SAP if you please too. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in Virtual Machines, or even Oracle Solaris Zones, monitor and manage those from a central UI. 
Here the key takeaways once again: The SPARC SuperCluster: Is a pre-integrated Engineered System Contains SPARC T4-4 servers with built-in virtualization, cryptography, dynamic threading Contains the Exadata storage cells that intelligently offload the burden of the DB-nodes  Contains a highly available ZFS Storage Appliance, that provides SAN/NAS storage in a unified way Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand Can grow from a single half-rack to several full-rack size Supports the consolidation of hundreds of applications To summarize: All these technologies are great by themselves, but the real value is like in every other Oracle Engineered System: Integration. All these technologies are tuned to perform together. Together they are way more than the sum of all - and a careful and actually very time consuming integration process is necessary to orchestrate all these for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and most importantly time and resource consuming part of the work - testing and evaluating - has been done.  Now go, provide services.   -- charlie  

    Read the article

  • Regex to detect a proper permalink?

    - by Fedor
    The following permalinks are rerouted to my page:

        page.php?permalink=events/foo
        page.php?permalink=events/foo/
        page.php?permalink=ru/events/foo
        page.php?permalink=ru/events/foo/

    The events segment is dynamic; it could be specials or packages. My dilemma is basically: I need to detect an empty link so that I can feed a robots noindex meta tag in the case of:

        page.php?permalink=events
        page.php?permalink=events/
        page.php?permalink=ru/events/
        page.php?permalink=ru/events

    I can't use a simple pattern such as [a-zA-Z]+\/?(.+)/ since it won't work on the i18n permalinks. What regex could I use to detect this, using $_GET['permalink'] as the reference to the permalinks, while avoiding false positives?

    Update: "Empty link" means there is no fragment after the "events/" part. These are empty:

        page.php?permalink=events
        page.php?permalink=events/
        page.php?permalink=ru/events/
        page.php?permalink=ru/events
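
    Not from the question, but one possible pattern, shown here in Java for illustration (the same pattern can be used with PHP's preg_match against $_GET['permalink']). It assumes an optional two-letter language prefix and lowercase section names:

        import java.util.regex.Pattern;

        public class PermalinkCheck {
            // matches "events", "events/", "ru/events", "ru/events/" but not "events/foo"
            private static final Pattern EMPTY_LINK = Pattern.compile("^(?:[a-z]{2}/)?[a-z]+/?$");

            public static boolean isEmpty(String permalink) {
                return EMPTY_LINK.matcher(permalink).matches();
            }

            public static void main(String[] args) {
                System.out.println(isEmpty("ru/events/"));   // true  -> emit the noindex meta tag
                System.out.println(isEmpty("events/foo"));   // false -> normal page
            }
        }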

    Read the article

  • Help with Strings in C

    - by Anon
    Given the char * variables name1, name2, and name3, write a fragment of code that assigns the largest value to the variable max (assume all three have already been declared and have been assigned values). I've tried and came up with this:

        if ((strcmp(name1, name2) > 0) && (strcmp(name1, name3) > 0)) {
            max = name1;
        } else if ((strcmp(name2, name1) > 0) && (strcmp(name2, name3) > 0)) {
            max = name2;
        } else if ((strcmp(name3, name1) > 0) && (strcmp(name3, name2) > 0)) {
            max = name3;
        }

    However, I get this error: "Your code is incorrect. You are not handling the situation where two or more strings are equal."

    Read the article

  • Using LINQ to XML, how can I join two sets of data based on ordinal position?

    - by Donald Hughes
    Using LINQ to XML, how can I join two sets of data based on ordinal position?

        <document>
          <set1>
            <value>A</value>
            <value>B</value>
            <value>C</value>
          </set1>
          <set2>
            <value>1</value>
            <value>2</value>
            <value>3</value>
          </set2>
        </document>

    Based on the above fragment, I would like to join the two sets together such that "A" and "1" are in the same record, "B" and "2" are in the same record, and "C" and "3" are in the same record.
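
    For comparison only, the same ordinal pairing sketched with the Java DOM API rather than LINQ to XML (document.xml is an assumed file name holding the fragment above):

        import java.io.File;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.w3c.dom.NodeList;

        public class OrdinalJoin {
            public static void main(String[] args) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new File("document.xml"));
                NodeList set1 = ((Element) doc.getElementsByTagName("set1").item(0)).getElementsByTagName("value");
                NodeList set2 = ((Element) doc.getElementsByTagName("set2").item(0)).getElementsByTagName("value");
                int n = Math.min(set1.getLength(), set2.getLength());
                for (int i = 0; i < n; i++) {        // pair items by position: A-1, B-2, C-3
                    System.out.println(set1.item(i).getTextContent() + " - " + set2.item(i).getTextContent());
                }
            }
        }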

    Read the article

  • Nitrogen: changing targetID breaks lightbox

    - by dewd
    I'm using Nitrogen & lightbox. I'm looking for some guidance after spending way too long trying to understand why a working example breaks as soon as I change the targetID of a lightbox. The fragment below works if I use "name_dialog" or "share_dialog", but not if I use "compose_dialog". I've looked through the source and style sheets, but have not found where those two are defined any differently than what I'm trying to do. In my .hrl: ... -record (compose_dialog, { ?ELEMENT_BASE(compose_dialog_element) }). .. In my element module: ... reflect() -> record_info(fields, compose_dialog). render_element(_HtmlID, _Record) -> #lightbox { id=compose_lightbox, style="display: none;", body = [ .. show() -> wf:wire(compose_lightbox, #show {}).

    Read the article

  • QPainter paints garbage

    - by DSblizzard
    Fragment of program code: def add_link(Item0Num, Item1Num): global Mw, View # Mw - MainWindow if Item0Num != Item1Num and not link_exists(Item0Num, Item1Num): append( links_to(Item1Num), Item0Num ) append( links_from(Item0Num), Item1Num ) LinkGi = TLinkGi() Mw.Scene.addItem(LinkGi) LinkGi.setZValue(200) LinkGi.scale(1 / View.Scale, 1 / View.Scale) LinkGi.Item0Num = Item0Num LinkGi.Item1Num = Item1Num class TLinkGi(QGraphicsItem): def paint(self, Painter, StyleOptionGraphicsItem, Widget): global Mw, View Pen = QPen(Qt.black, 1) Painter.setPen(Pen) X0, Y0 = task_center(self.Item0Num) self.setPos(X0, Y0) X1, Y1 = task_center(self.Item1Num) X, Y = int( (X1 - X0) * View.Scale ), int( (Y1 - Y0) * View.Scale ) Painter.drawLine(0, 0, X, Y) #Mw.Scene.update(0, 0, Plan.Size, Plan.Size) # (1) #Mw.gvMain.repaint() # (2) def boundingRect(self): global View Rect = QRectF(0, 0, Plan.Size, Plan.Size) return Rect This paints such garbage: http://img697.imageshack.us/content_round.php?page=done&l=img697/5395/qpaintergarbage1.jpg When lines (1) and (2) are uncommented things doesn't become much better: http://img63.imageshack.us/content_round.php?page=done&l=img63/9693/qpaintergarbage0.jpg Please help me to solve this problem.

    Read the article

  • Different NSTimeZone names being returned

    - by Michael Waterfall
    I've noticed that on some devices NSTimeZone's name method for a particular time zone can return different values. When testing the Brisbane time zone, my device returns @"Australia/Brisbane" whereas another user's device returns "Etc/GMT-10". Both iPhones are running 3.1.2. The Date and Time Programming Guide for Cocoa states, for timeZoneWithName:, that "The name passed to this method may be in any of the formats understood by the system, for example EST, Etc/GMT-2, America/Argentina/Buenos_Aires, Europe/Monaco, or US/Pacific, as shown in the following code fragment." I'd just like to know what determines which value is used. The device? The language?

    Read the article

  • c# inheriting generic collection and serialization...

    - by Stecy
    Hi, The setup: class Item { private int _value; public Item() { _value = 0; } public int Value { get { return _value; } set { _value = value; } } } class ItemCollection : Collection<Item> { private string _name; public ItemCollection() { _name = string.Empty; } public string Name { get {return _name;} set {_name = value;} } } Now, trying to serialize using the following code fragment: ItemCollection items = new ItemCollection(); ... XmlSerializer serializer = new XmlSerializer(typeof(ItemCollection)); using (FileStream f = File.Create(fileName)) serializer.Serialize(f, items); Upon looking at the resulting XML I see that the ItemCollection.Name value is not there! I think what may be happening is that the serializer sees the ItemCollection type as a simple Collection thus ignoring any other added properties... Is there anyone having encountered such a problem and found a solution? Regards, Stécy

    Read the article

  • Christmas in the Clouds

    - by andrewbrust
    I have been spending the last 2 weeks immersing myself in a number of Windows Azure and SQL Azure technologies.  And in setting up a new business (I’ll speak more about that in the future), I have also become a customer of Microsoft’s BPOS (Business Productivity Online Services).  In short, it has been a fortnight of Microsoft cloud computing. On the Azure side, I’ve looked, of course, at Web Roles and Worker Roles.  But I’ve also looked at Azure Storage’s REST API (including coding to it directly), I’ve looked at Azure Drive and the new VM Role; I’ve looked quite a bit at SQL Azure (including the project “Houston” Silverlight UI) and I’ve looked at SQL Azure labs’ OData service too. I’ve also looked at DataMarket and its integration with both PowerPivot and native Excel.  Then there’s AppFabric Caching, SQL Azure Reporting (what I could learn of it) and the Visual Studio tooling for Azure, including the storage of certificate-based credentials.  And to round it out with some user stuff, on the BPOS side, I’ve been working with Exchange Online, SharePoint Online and LiveMeeting. I have to say I like a lot of what I’ve been seeing.  Azure’s not perfect, and BPOS certainly isn’t either.  But there’s good stuff in all these products, and there’s a lot of value. Azure Goes Deep Most people know that Web and Worker roles put the platform in charge of spinning virtual machines up and down, and keeping them up to date. But you can go way beyond that now.  The still-in-beta VM Role gives you the power to craft the machine (much as does Amazon’s EC2), though it takes away the platform’s self-managing attributes.  It still spins instances up and down, making drive storage non-durable, but Azure Drive gives you the ability to store VHD files as blobs and mount them as virtual hard drives that are readable and writeable.  Whether with Azure Storage or SQL Azure, Azure does data.  And OData is everywhere.  Azure Table Storage supports an OData Interface.  So does SQL Azure and so does DataMarket (the former project “Dallas”).  That means that Azure data repositories aren’t just straightforward to provision and configure…they’re also easy to program against, from just about any programming environment, in a RESTful manner.  And for more .NET-centric implementations, Azure AppFabric caching takes the technology formerly known as “Velocity” and throws it up into the cloud, speeding data access even more. Snapping in Place Once you get the hang of it, this stuff just starts to work in a way that becomes natural to understand.  I wasn’t expecting that, and I was really happy to discover it. In retrospect, I am not surprised, because I think the various Azure teams are the center of gravity for Redmond’s innovation right now.  The products belie this and so do my observations of the product teams’ motivation and high morale.  It is really good to see this; Microsoft needs to lead somewhere, and they need to be seen as the underdog while doing so.  With Azure, both requirements are in place.   BPOS: Bad Acronym, Easy Setup BPOS is about products you already know; Exchange, SharePoint, Live Meeting and Office Communications Server.  As such, it’s hard not to be underwhelmed by BPOS.  Until you realize how easy it makes it to get all that stuff set up.  
    I would say that from sign-up to productive use took me about 45 minutes…and that included the time necessary to wrestle with my DNS provider, set up Outlook and my SmartPhone to talk to the Exchange account, create my SharePoint site collection, and configure the Outlook Conferencing add-in to talk to the provisioned Live Meeting account. Never before did I think setting up my own Exchange mail could come anywhere close to the simplicity of setting up an SMTP/POP account, and yet BPOS actually made it faster.

    What I want from my Azure Christmas Next Year

    Not everything about Microsoft’s cloud is good. I close this post with a list of things I’d like to see addressed:

    BPOS offerings are still based on the 2007 Wave of Microsoft server technologies. We need to get to 2010, and fast. Arguably, the 2010 products should have been released to the off-premises channel before the on-premises ones. Office 365 can’t come fast enough.

    Azure’s Internet tooling and domain naming are scattered and confusing. Deployed ASP.NET applications go to cloudapp.net; SQL Azure and Azure storage work off windows.net. The Azure portal and Project Houston are at azure.com. Then there’s appfabriclabs.com and sqlazurelabs.com. There is a new Silverlight portal that replaces most, but not all, of the HTML ones. And Project Houston is Silverlight-based too, though separate from the Silverlight portal tooling.

    Microsoft is the king of tooling. They should not make me keep an entire OneNote notebook full of portal links, account names, access keys, assemblies and namespaces and do so much CTRL-C/CTRL-V work. I’d like to see more project templates, have them automatically reference the appropriate assemblies, generate the right using/Imports statements and prime my config files with the right markup. Then I want a UI that lets me log in with my Live ID and pick the appropriate project, database, namespace and key string to get set up fast.

    Beta programs, if they’re open, should onboard me quickly. I know the process is difficult and everyone’s going as fast as they can. But I don’t know why it’s so difficult or why it takes so long. Getting developers up to speed on new features quickly helps popularize the platform. Make this a priority.

    Make Azure accessible from the simplicity platforms, i.e. ASP.NET Web Pages (Razor) and LightSwitch. Support .NET 4 now. Make WebMatrix, IIS Express and SQL Compact work with the Azure development fabric. Have HTML helpers make Azure programming easier. Have LightSwitch work with SQL Azure and not require SQL Express. LightSwitch has some promising Azure integration now. But we need more. WebMatrix has none and that’s just silly, now that the Extra Small Instance is being introduced.

    The Windows Azure Platform Training Kit is great. But I want Microsoft to make it even better and I want them to evangelize it much more aggressively. There’s a lot of good material on Azure development out there, but it’s scattered in the same way that the platform is. The Training Kit ties a lot of disparate stuff together nicely. Make it known.

    Should Old Acquaintance Be Forgot

    All in all, diving deep into Azure was a good way to end the year. Diving deeper into Azure should be a great way to spend next year, not just for me, but for Microsoft too.

    Read the article

  • What is the correct way to specify dimensions in DIP from Java code ?

    - by Pavel Lahoda
    Trying to digest the docs about designing for multiple resolutions (http://d.android.com/guide/practices/screens_support.html#dips-pels), I found that it is possible to set the dimensions of my interface elements in XML layouts using DIPs, as in the following fragment: android:layout_width="10dip". But the Java interface takes integers as arguments and there is no way to specify dimensions in DIPs. What is the correct way to calculate this? I figured that I have to use the density property of the DisplayMetrics class, but is this the correct way? May I rely on the formula pixels * DisplayMetrics.density = dip always being correct? Will there be an API in DIPs for Java?
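
    A hedged sketch of the usual conversion; note that it runs the other way around, px = dip * density. TypedValue.applyDimension does the multiplication for you, and the Dimensions class name is made up:

        import android.content.Context;
        import android.util.TypedValue;

        public final class Dimensions {
            // preferred: let the framework convert dip -> raw pixels
            public static int dipToPx(Context context, float dip) {
                return (int) TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, dip,
                        context.getResources().getDisplayMetrics());
            }

            // equivalent by hand, e.g. for building layout params programmatically
            public static int dipToPxManual(Context context, float dip) {
                float density = context.getResources().getDisplayMetrics().density;
                return (int) (dip * density + 0.5f);   // round to the nearest pixel
            }
        }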

    Read the article

  • Encode complex number as RGB pixel and back

    - by Vi
    What is the best way to encode a complex number into an RGB pixel and vice versa? Probably the (logarithm of the) absolute value goes to brightness and the argument goes to hue. Desaturated pixels should receive a randomized argument in the reverse transformation. Something like:

        0    - (0,0,0)
        1    - (255,0,0)
        -1   - (0,255,255)
        0.5  - (128,0,0)
        i    - (255,255,0)
        -i   - (255,0,255)

        (0,0,0)       - 0
        (255,255,255) - e^(i * random)
        (128,128,128) - 0.5 * e^(i * random)
        (0,128,128)   - -0.5

    Are there ready-made formulas for that?

    Edit: Looks like I just need to convert RGB to HSB and back.

    Edit 2: Existing RGB - HSV converter fragment:

        if (hsv.sat == 0) {
            hsv.hue = 0; // !
            return hsv;
        }

    I don't want 0. I want random. And not just if hsv.sat == 0, but if it is lower than it should be ("should be" means maximum saturation, the saturation after transformation from a complex number).
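
    One possible mapping, sketched in Java with the built-in HSB converters. The choices hue = argument / 2π, full saturation, and brightness = clamped magnitude are assumptions rather than a standard; taking the logarithm of the magnitude, as suggested above, only changes the brightness line:

        import java.awt.Color;
        import java.util.Random;

        public class ComplexColor {
            private static final Random RND = new Random();

            // argument -> hue, magnitude -> brightness (clamped to [0, 1]), full saturation
            static int toRgb(double re, double im) {
                float hue = (float) (Math.atan2(im, re) / (2 * Math.PI));
                float brightness = (float) Math.min(1.0, Math.hypot(re, im));
                return Color.HSBtoRGB(hue, 1.0f, brightness);
            }

            // reverse: a desaturated pixel gets a randomized argument, as the question asks
            static double[] toComplex(int rgb) {
                Color c = new Color(rgb);
                float[] hsb = Color.RGBtoHSB(c.getRed(), c.getGreen(), c.getBlue(), null);
                double arg = hsb[1] == 0 ? RND.nextDouble() * 2 * Math.PI : hsb[0] * 2 * Math.PI;
                return new double[] { hsb[2] * Math.cos(arg), hsb[2] * Math.sin(arg) };
            }
        }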

    Read the article

  • How to get a flat, non-interpolated color when using vertex shaders.

    - by Brett
    Hi, Is there a way to achieve this? If I draw lines like this glShadeModel(GL_FLAT); glBegin(GL_LINES); glColor3f(1.0, 1.0, 0.0); glVertex3fv(bottomLeft); glVertex3fv(topRight); glColor3f(1.0, 0.0, 0.0); glVertex3fv(topRight); glVertex3fv(topLeft); . . (draw a square) . . glEnd(); I get the desired result (a different colour for each edge) but I want to be able to calculate the fragment values in a shader. If I do the same after setting up my shader program I always get interpolated colors between vertices. Is there a way around this? (would be even better if I could get the same results using quads) Thanks

    Read the article

  • Escape characters during paste in vim

    - by Michael Anderson
    I copy stuff from output buffers into C++ code I'm working on in vim. Often this output gets stuck into strings. And it'd be nice to be able to escape all the control characters automatically rather than going back and hand editing the pasted fragment. As an example I might copy something like this: error in file "foo.dat" And need to put it into something like this std::string expected_error = "error in file \"foo.dat\"" I'm thinking it might be possible to apply a replace function to the body of the last paste using the start and end marks of the last paste, but I'm not sure how to make it fly.

    Read the article

  • static arrays defined with unspecified size, empty brackets?

    - by ahmadabdolkader
    For the C++ code fragment below: class Foo { int a[]; // no error }; int a[]; // error: storage size of 'a' isn't known void bar() { int a[]; // error: storage size of 'a' isn't known } why isn't the member variable causing an error too? and what is the meaning of this member variable? I'm using gcc version 3.4.5 (mingw-vista special) through CodeBlocks 8.02. On Visual Studio Express 2008 - Microsoft(R) C/C++ Optimizing Compiler 15.00.30729.01 for 80x86, I got the following messages: class Foo { int a[]; // warning C4200: nonstandard extension used : zero-sized array in struct/union - Cannot generate copy-ctor or copy-assignment operator when UDT contains a zero-sized array }; int a[]; void bar() { int a[]; // error C2133: 'a' : unknown size } Now, this needs some explaination too.

    Read the article

  • Deleting an Image that has been used by a WPF control

    - by Peter
    Hi, I would like to bind an Image to some kind of control and delete it later on.

        path = @"c:\somePath\somePic.jpg";
        FileInfo fi = new FileInfo(path);
        Uri uri = new Uri(fi.FullName, UriKind.Absolute);
        var img = new System.Windows.Controls.Image();
        img.Source = new BitmapImage(uri);

    Now after this code I would like to delete the file:

        fi.Delete();

    But I cannot do that since the image is being used now. Between code fragments 1 and 2, what can I do to release it? Thanks

    Read the article

  • Stretch column table in Primefaces

    - by Orange91
    How can I prevent the table from stretching when I input long text?

    Screen: http://zapodaj.net/71821572f2445.jpg.html

    Maybe it is possible to make the text wrap onto more lines instead of stretching the row horizontally? An example fragment of my table:

        <p:dataTable id="table" styleClass="table" value="#{userMB.allInactive}" var="inactive"
                     paginator="true" rows="15" rowKey="#{inactive.id}"
                     selection="#{userMB.user}" selectionMode="single" >
            <f:facet name="header">
                Lista kont nieaktywnych
            </f:facet>
            <p:column headerText="#{msg.firstName}">
                <h:outputText value="#{inactive.firstName}" />
            </p:column>

    I tried <p:column headerText="#{msg.firstName}" width="20px"> and a styleClass for the column:

        <p:column styleClass="column" headerText="#{msg.firstName}" width="20px">

        .column {
            width: 20px;
        }

    but I do not see any change; it does not work.

    Read the article
