Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Accessing Server-Side Data from Client Script: Using Ajax Web Services, Script References, and jQuery

    Today's websites commonly exchange information between the browser and the web server using Ajax techniques. In a nutshell, the browser executes JavaScript code typically in response to the page loading or some user action. This JavaScript makes an asynchronous HTTP request to the server. The server processes this request and, perhaps, returns data that the browser can then seamlessly integrate into the web page. Typically, the information exchanged between the browser and server is serialized into JSON, an open, text-based serialization format that is both human-readable and platform independent. Adding such targeted, lightweight Ajax capabilities to your ASP.NET website requires two steps: first, you must create some mechanism on the server that accepts requests from client-side script and returns a JSON payload in response; second, you need to write JavaScript in your ASP.NET page to make an HTTP request to this service you created and to work with the returned results. This article series examines a variety of techniques for implementing such scenarios. In Part 1 we used an ASP.NET page and the JavaScriptSerializer class to create a server-side service. This service was called from the browser using the free, open-source jQuery JavaScript library. This article continues our examination of techniques for implementing lightweight Ajax scenarios in an ASP.NET website. Specifically, it examines how to create ASP.NET Ajax Web Services on the server-side and how to use both the ASP.NET Ajax Library and jQuery to consume them from the client-side. Read on to learn more!
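
    To make the flow concrete, here is a minimal sketch of the client-side half of such an exchange, assuming a hypothetical script-enabled web service UserService.asmx with a GetUsers method returning a list of users (the service name, method, response shape, and the #users element are illustrative, not from the article). Note that ASP.NET Ajax web services expect a POST with a JSON content type and wrap their payload in a "d" property:

        // Call a hypothetical ASP.NET Ajax (ASMX) web service with jQuery.
        $.ajax({
            type: "POST",                                   // script services only answer POSTs
            url: "UserService.asmx/GetUsers",               // hypothetical service and method
            data: "{}",                                     // empty JSON object: no parameters
            contentType: "application/json; charset=utf-8", // required, or ASP.NET returns XML
            dataType: "json",
            success: function (msg) {
                // ASP.NET 3.5+ wraps the return value in a "d" property to guard
                // against JSON hijacking, so the actual array lives in msg.d.
                $.each(msg.d, function (i, user) {
                    $("#users").append("<li>" + user.Name + "</li>"); // hypothetical markup
                });
            }
        });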


  • Data Web Controls Enhancements in ASP.NET 4.0

    Traditionally, developers using Web controls enjoyed increased productivity but at the cost of control over the rendered markup. For instance, many ASP.NET controls automatically wrap their content in <table> for layout or styling purposes. This behavior runs counter to the web standards that have evolved over the past several years, which favor cleaner, terser HTML; sparing use of tables; and Cascading Style Sheets (CSS) for layout and styling. Furthermore, the <table> elements and other automatically-added content make it harder both to style the Web controls using CSS and to work with the controls from client-side script. One of the aims of ASP.NET version 4.0 is to give Web Form developers greater control over the markup rendered by Web controls. Last week's article, Take Control Of Web Control ClientID Values in ASP.NET 4.0, highlighted how new properties in ASP.NET 4.0 give the developer more say over how a Web control's ID property is translated into a client-side id attribute. In addition to these ClientID-related properties, many Web controls in ASP.NET 4.0 include properties that allow the page developer to instruct the control not to emit extraneous markup, or to use an HTML element other than <table>. This article explores a number of enhancements made to the data Web controls in ASP.NET 4.0. As you'll see, most of these enhancements give the developer greater control over the rendered markup. Read on to learn more!



  • Changing Endpoint URL for a Web Service Data Control

    - by vishal.s.jain(at)oracle.com
    When you move your application from Development to Production, there is, more often than not, a need to change the web service endpoint URL in your ADF application. If you are using a Web Service Data Control (WSDC), you can do this in more than one way. The following example illustrates how this can be done.

    At Design Time
    If the application workspace is in your control, you can quickly do this by updating the definition in the DataControl.dcx file. Along with this, you will also need to change the endpoint in connections.xml: invoke the Edit Connections dialog, then change the endpoint URL.

    At Deployment
    Another option is changing the endpoint at the EAR level, at deployment. When you select Deploy -> Application Server at the application level, it will bring up a Deployment Configuration dialog in which you can edit the WSDL URL. Also change the Port URL.

    At Post Deployment
    If you need to change this post-deployment, you can do it through Oracle Enterprise Manager. For this, your application needs to be configured with a writable MDS repository; it is recommended you use a database MDS store during deployment. So have your application configured (by having an entry in adf-config.xml) and the server configured (by having an MDS store registered). Once done, you can configure the ADF Connection in EM for this application: change the WSDL location via 'Edit', then change the Port using the Advanced Connection Configuration and change the Endpoint Address there. Apply the changes and you are done!


  • Twitter status id conundrum

    - by jamiet
    I have an interest, a slightly perverse one some might say, in using online services and trying to figure out what the underlying (logical) data model is, and in this day and age Twitter is one that lends itself very well to scrutiny. Consider this recent tweet of mine: the URL that enables you to see that tweet is http://twitter.com/jamiet/status/12154647354. We can interpret that URL to mean "a tweet by jamiet with an id of 12154647354" and hence we might further assume that the unique identifier for the tweet is {jamiet,12154647354}. However, it's well known that Twitter gives each status a unique ID regardless of who tweeted it, so we might expect to reach that tweet just by using a URL of http://twitter.com/status/12154647354; however (at the time of writing) that only redirects to Twitter's homepage. That seems strange to me, especially given that we can use Twitter's API to access information about that tweet using only the id of the status. Witness http://api.twitter.com/1/statuses/show/12154647354.xml. [We can also access a JSON version of that information using http://api.twitter.com/1/statuses/show/12154647354.json] I'm puzzled as to why a tweet can't be accessed on the main Twitter website using the id alone. Anyone have any suggestions? @jamiet
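
    For what it's worth, the JSON endpoint mentioned above could be consumed directly from the browser with jQuery. A quick sketch (the endpoint is taken from the post and belongs to Twitter's since-retired v1 REST API; the trailing callback=? makes jQuery issue a JSONP request, which sidesteps the same-origin policy for this read-only call):

        // Fetch a single tweet by status id from Twitter's (old) v1 REST API.
        $.getJSON(
            "http://api.twitter.com/1/statuses/show/12154647354.json?callback=?",
            function (status) {
                // The payload carries the tweet text and its author.
                console.log(status.text + " by @" + status.user.screen_name);
            }
        );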


  • SQL SERVER – Plan Cache and Data Cache in Memory

    - by pinaldave
    I get the following question almost all the time when I go for consultations or training, and I often end up providing the scripts to my clients and attendees. Instead of writing a new blog post each time, in this single post I am going to cover both scripts and link to the original posts where they are explained in detail.

    Plan Cache in Memory

        USE AdventureWorks
        GO
        SELECT [text], cp.size_in_bytes, plan_handle
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(plan_handle)
        WHERE cp.cacheobjtype = N'Compiled Plan'
        ORDER BY cp.size_in_bytes DESC
        GO

    Further explanation of this script is over here: SQL SERVER – Plan Cache – Retrieve and Remove – A Simple Script

    Data Cache in Memory

        USE AdventureWorks
        GO
        SELECT COUNT(*) AS cached_pages_count,
            name AS BaseTableName, IndexName, IndexTypeDesc
        FROM sys.dm_os_buffer_descriptors AS bd
        INNER JOIN (
            SELECT s_obj.name, s_obj.index_id, s_obj.allocation_unit_id,
                s_obj.OBJECT_ID, i.name IndexName, i.type_desc IndexTypeDesc
            FROM (
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.hobt_id
                    AND (au.TYPE = 1 OR au.TYPE = 3)
                UNION ALL
                SELECT OBJECT_NAME(OBJECT_ID) AS name, index_id, allocation_unit_id, OBJECT_ID
                FROM sys.allocation_units AS au
                INNER JOIN sys.partitions AS p
                    ON au.container_id = p.partition_id
                    AND au.TYPE = 2
            ) AS s_obj
            LEFT JOIN sys.indexes i
                ON i.index_id = s_obj.index_id
                AND i.OBJECT_ID = s_obj.OBJECT_ID
        ) AS obj
            ON bd.allocation_unit_id = obj.allocation_unit_id
        WHERE database_id = DB_ID()
        GROUP BY name, index_id, IndexName, IndexTypeDesc
        ORDER BY cached_pages_count DESC;
        GO

    Further explanation of this script is over here: SQL SERVER – Get Query Plan Along with Query Text and Execution Count

    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • How to cross-reference many character encodings with ASCII OR UTFx?

    - by Garet Claborn
    I'm working with a binary structure, the goal of which is to index the significance of specific bits for any character encoding, so that we may trigger events while doing specific checks against the profile. Each character encoding scheme has an associated system record. This record's leading value is a C++ unsigned long long binary value and signifies the length, in bits, of encoded characters. Following the length are three values, each a bit field of that length:

    offset_mask - defines the occurrence of non-printable characters within the min,max of print_mask
    range_mask - defines the occurrence of the most popular 50% of printable characters
    print_mask - defines the occurrence value of printable characters

    The structure of profiles has changed since the OP of this question. Most likely I will try to factorize or compress these values in the long term instead of starting out with ranges, after reading more. I have to write some of the core functionality myself for these main reasons:

    1. It has to fit into a particular event architecture we are using.
    2. Better understanding of character encoding. I'm about to need it.
    3. Integrating into a non-linear design excludes many libraries without special hooks.

    I'm unsure if there is a standard, cross-encoding mechanism for communicating such data already. I'm just starting to look into how chardet might do profiling, as suggested by @amon. The Unicode BOM would be easy enough (for my current project) if all encodings were Unicode. Of course ideally one would like to support all encodings, but I'm not asking about implementation, only the general case. How can these profiles be efficiently populated, to produce a set of bitmasks which we can use to match strings with common characters in multiple languages? If you have any editing suggestions please feel free; I am a lightweight when it comes to localization, which is why I'm trying to reach out to the more experienced. Any caveats you may be able to help with will be appreciated.
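
    For the "how can these profiles be populated" part, here is a rough sketch of the idea for the trivial single-byte case, written in JavaScript with BigInt standing in for the arbitrary-width bit fields (the function name and the classification predicate are my own illustrative stand-ins, not part of the structure described above):

        // Build the print/offset masks for a hypothetical small encoding profile.
        // Bit N of a mask describes code point N.
        function buildProfile(bitsPerChar, isPrintable) {
            const count = 1n << BigInt(bitsPerChar); // number of code points
            let printMask = 0n;
            let offsetMask = 0n;
            for (let cp = 0n; cp < count; cp++) {
                if (isPrintable(Number(cp))) {
                    printMask |= 1n << cp;   // printable code point
                } else {
                    offsetMask |= 1n << cp;  // non-printable code point
                }
            }
            // range_mask (the most popular 50% of printable characters) is omitted:
            // it would need per-language corpus frequency data, not just a predicate.
            return { lengthInBits: bitsPerChar, offsetMask, printMask };
        }

        // Example: a crude 7-bit ASCII classifier (printable = 0x20..0x7E).
        const ascii = buildProfile(7, cp => cp >= 0x20 && cp <= 0x7E);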


  • SQL Authority News – Download SQL Server Data Type Conversion Chart

    - by pinaldave
    Datatypes are a very important concept in SQL Server, and quite often there is a need to convert them from one datatype to another. I have seen that developers often get confused when they have to convert a datatype. There are two important concepts when it comes to datatype conversion.

    Implicit Conversion: Implicit conversions are those conversions that occur without specifying either the CAST or CONVERT function.

    Explicit Conversion: Explicit conversions are those conversions that require the CAST or CONVERT function to be specified.

    What this means is that if you are trying to convert a value from datetime2 to time or from tinyint to int, SQL Server will automatically convert it (implicit conversion) for you. However, if you are attempting to convert timestamp to smalldatetime or datetime to int, you will need to explicitly convert it using either the CAST or CONVERT function with the appropriate parameters. Let us see a quick example of implicit conversion and explicit conversion.

    Implicit Conversion: (shown as a screenshot in the original post)

    Explicit Conversion: (shown as a screenshot in the original post)

    You can see from the above examples how we need both types of conversion in different situations. There are so many different datatypes that it is humanly impossible to know which datatype requires implicit and which requires explicit conversion. Additionally, there are cases where the conversion is not possible at all. Microsoft has published a chart where the grid displays the various conversion possibilities, as well as a quick guide.

    Download SQL Server Data Type Conversion Chart

    Reference: Pinal Dave (http://blog.sqlauthority.com)


  • Call for Abstracts Now Open for Microsoft ASP.NET Connections (Closing April 26)

    - by plitwin
    We are putting out a call for abstracts to present at the Fall 2010 Microsoft ASP.NET Connections conference in Las Vegas, Nov 9-13, 2010. The due date for submissions is April 26, 2010. For submitting sessions, please use this URL: http://www.deeptraining.com/devconnections/abstracts

    Please keep the abstracts under 200 words each and in one paragraph. No bulleted items or line breaks, and please use a spell-checker. Do not email abstracts; you need to use the web-based tool to submit them. Please submit at least 3 abstracts, but it will help your chances of being selected if you submit 5 or more. You are also encouraged to suggest all-day pre- or post-conference workshops. We need to finalize the conference content and the track layout in just a few short weeks, so we need your abstracts by April 26th. No exceptions will be granted on late submissions!

    Topics of interest include (but are not limited to):
    * ASP.NET Webforms
    * ASP.NET AJAX
    * ASP.NET MVC
    * Dynamic Data
    * Anything else related to ASP.NET

    For Fall 2010, we are having a separate Silverlight conference where you can submit abstracts for Silverlight and Windows 7 Phone development. In fact, you can use the same URL to submit sessions to Microsoft ASP.NET Connections, Silverlight Connections, Visual Studio Connections, or SQL Server Connections. The URL again is: http://www.deeptraining.com/devconnections/abstracts

    Please realize that while we want a lot of the new and the cool, it's also okay to propose sessions on the more mundane "real world" stuff as it pertains to ASP.NET.

    What you will get if selected:
    * $500 per regular conference talk.
    * Compensation for full-day workshops ranges from $500 for 1-20 attendees to $2500 for 200+ attendees.
    * Coach airfare and hotel stay paid by the conference.
    * Free admission to all of the co-located conferences.
    * Speaker party.
    * The adoration of attendees.
    * etc.

    Your continued support of Microsoft ASP.NET Connections and the other DevConnections conferences is appreciated. Good luck and thank you,
    Paul Litwin
    Microsoft ASP.NET Conference Chair


  • Demantra USA Based Companies and SOX Compliance

    - by user702295
    A USA-based company is assessing Demantra Trade Promotion Management (TPM) capability. It appears that SOX is necessary in their case due to the nature of what TPM does and the necessity for auditability. Do we have any detail on SOX compliance for Demantra?

    Answer
    ------
    SOX compliance with regards to IT:

    1. Requires auditing of data changes: done by whom, what, and when.
       a. Audit trail profiles can be set up for key financial series, and you can view them in audit trail reports.
       b. One piece of functionality we do not have, which is typically asked for, is user login history. We have only active sessions; history is not available.

    2. Segregation of duties.
       a. With respect to TPM, you could have the deduction analyst and the financial analyst for settlement be different from the promotion creator, promotion approver, or sales team.
       b. The budget approver for funds can be different from the funds consumer.
       c. The promotion creator can be different from the promotion approver.
       d. For a US customer you may have to write some custom scripts to capture promotion status changes and produce an external report as part of compliance.

    One additional requirement is transparency of forward commitments entered into with retailers / distributors for trade spending and promotions. This is outside of Demantra - see Consumer Goods Trade Funds Analytics.


  • SQL SERVER – Simple Installation of Master Data Services (MDS) and Sample Packages – Very Easy

    - by pinaldave
    I tweeted recently: 'Installing #sql Server 2008 R2 – Master Data Services. Painless.' After doing so, I got quite a few emails from other users asking why I thought it was painless. The reason was very simple: I was able to install it rather quickly on my laptop without any issues. Along with these emails came a few requests regarding how to install MDS, as well as the sample databases. Please note that I am the admin of my machine and I installed MDS as the admin as well. Talk to your network administrator to figure out the most suitable settings for better security of login users. Additionally, since MDS is only supported on a 64-bit machine, I had rebuilt my computer a week before with a 64-bit OS and a 64-bit SQL Server to go with it. Here is a quick picture tour of the installation. First of all, go to your SQL Server 2008 R2 self-extracted installation folder and find the .msi file at C:\1033_enu_lp\x64\setup\masterdataservices.msi. Once you click on the file, follow the image tour below. You can ask me any questions in case you are still confused by any of the steps of the installation. While searching the internet for a similar installation process, I landed on the official blog of the MDS team, where they have many interesting posts about it. If anything written in my posts contradicts the information on the official blog, I suggest that you follow the advice of the official blog.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)


  • Leverage Location Data with Maps in Your Apps

    - by stephen.garth
    Free Webinar: "Add Maps to Your Java Applications - The Easy Way", Wednesday May 26 at 9:00am Pacific Time. Putting maps in your apps is a great way to put your apps on the map! Maps provide a location context that can trigger those "aha" moments leading to better business decisions. Tune into this free webinar to find out how easy it is to leverage spatial and location data, much of which is already in your Oracle Database. NAVTEQ's Dan Abugov and Oracle's Shay Shmeltzer combine their considerable experience with Oracle Spatial and Java application development to demonstrate how you can quickly and easily add maps to your Java applications, leveraging Oracle Spatial 11g, Oracle Fusion Middleware MapViewer, Oracle JDeveloper and ADF 11g. Register here. Learn more about Oracle spatial and location technology.


  • SQL Bits X – Temporal Snapshot Fact Table Session Slide & Demos

    - by Davide Mauri
    Ten days have already passed since SQL Bits X in London. I really enjoyed it! Those kinds of events are great not only for the content but also for meeting friends whom, due to distance, it is not possible to meet every day. Friends from PASS, SQL CAT, Microsoft, MVP and so on, all in one place, drinking beers and whisky and having fun. A perfect mixture for a great learning and sharing experience! I also really enjoyed delivering my session on Temporal Snapshot Fact Tables. Given that the subject is very specific, I was not expecting a lot of attendees... but I was totally wrong! It seems that the problem of handling daily snapshots of data is more common than I expected. I have also already had feedback from several attendees who applied the explained technique to their existing solutions with success. This is just what a speaker at such a conference wishes to hear! :) If you want to take a look at the slides and the demos, you can find them on SkyDrive: https://skydrive.live.com/redir.aspx?cid=377ea1391487af21&resid=377EA1391487AF21!1151&parid=root The demo is available both for SQL Server 2008 and for SQL Server 2012. With this last version, you can also simplify the ETL process using the new LEAD analytic function. (This is not done in the demo; I've left this option as a little exercise for you :) )


  • ASP.NET: Using conditionals in data binding expressions

    - by DigiMortal
    ASP.NET 2.0 has no support for conditionals in data binding expressions, but this changes in ASP.NET 4.0. In this posting I will show you how to implement an Iif() function for ASP.NET 2.0, and how ASP.NET 4.0 solves this problem smoothly, without any code.

    Problem

    Let's say we have a simple repeater.

        <asp:Repeater runat="server" ID="itemsList">
            <HeaderTemplate>
                <table border="1" cellspacing="0" cellpadding="5">
            </HeaderTemplate>
            <ItemTemplate>
                <tr>
                <td align="right"><%# Container.ItemIndex + 1 %>.</td>
                <td><%# Eval("Title") %></td>
                </tr>
            </ItemTemplate>
            <FooterTemplate>
                </table>
            </FooterTemplate>
        </asp:Repeater>

    The repeater is bound to data when the form loads.

        protected void Page_Load(object sender, EventArgs e)
        {
            var items = new[] {
                new { Id = 1, Title = "Headline 1" },
                new { Id = 2, Title = "Headline 2" },
                new { Id = 2, Title = "Headline 3" },
                new { Id = 2, Title = "Headline 4" },
                new { Id = 2, Title = "Headline 5" }
            };
            itemsList.DataSource = items;
            itemsList.DataBind();
        }

    We need to format even and odd rows differently. Let's say we want even rows to have a whitesmoke background and odd rows a white background, just like shown on the screenshot on the right. Our first thought is to use some simple expression to avoid writing custom methods. We cannot use a construct like

        <%# Container.ItemIndex % 2 == 0 ? "white" : "whitesmoke" %>

    because all we get are template compilation errors.

    ASP.NET 2.0: Iif() method

    For ASP.NET 2.0 pages and controls we can create an Iif() method and call it from our templates. This is our Iif() method.

        protected object Iif(bool condition, object trueResult, object falseResult)
        {
            return condition ? trueResult : falseResult;
        }

    And here you can see how to use it.

        <asp:Repeater runat="server" ID="itemsList">
          <HeaderTemplate>
            <table border="1" cellspacing="0" cellpadding="5">
          </HeaderTemplate>
          <ItemTemplate>
            <tr style='background-color: <%# Iif(Container.ItemIndex % 2 == 0, "white", "whitesmoke") %>'>
              <td align="right">
                <%# Container.ItemIndex + 1 %>.</td>
              <td>
                <%# Eval("Title") %></td>
            </tr>
          </ItemTemplate>
          <FooterTemplate>
            </table>
          </FooterTemplate>
        </asp:Repeater>

    This method does not care about types because it works with all objects (and value types). I had to define this method in the code-behind file of my user control, because making it an extension method made it undetectable for the ASP.NET template engine.

    ASP.NET 4.0: Conditionals are supported

    In ASP.NET 4.0 we will write ... hmm ... we will write nothing special. Here is the solution.

        <asp:Repeater runat="server" ID="itemsList">
          <HeaderTemplate>
            <table border="1" cellspacing="0" cellpadding="5">
          </HeaderTemplate>
          <ItemTemplate>
            <tr style='background-color: <%# Container.ItemIndex % 2 == 0 ? "white" : "whitesmoke" %>'>
              <td align="right">
                <%# Container.ItemIndex + 1 %>.</td>
              <td>
                <%# Eval("Title") %></td>
            </tr>
          </ItemTemplate>
          <FooterTemplate>
            </table>
          </FooterTemplate>
        </asp:Repeater>

    Yes, it works well. :)


  • XNA Required information to represent 2D Sprite graphically

    - by Fire-Dragon-DoL
    I was thinking about dividing my game engine into 2 threads: a render thread and an update thread (I can't figure out how to divide the update thread from a physics thread at the moment). That said, I have to duplicate all sprite information, so what do I really need to represent a 2D sprite graphically? Here are my ideas (I'll mark with ? the things I'm not sure about):

    Vector2 Position
    float Rotation ?
    Vector2 Pivot ?
    Rectangle TextureRectangle
    Texture2D Texture
    Vector2 ImageOrigin ? (is it tracked somewhere else?)

    If you have any suggestions about using different types for the data, they are appreciated. Last part of the question: isn't this a lot of data to copy into a buffer? What should I really copy into the buffer? I'm following this tutorial: http://www.sgtconker.com/2009/11/article-multi-threading-your-xna/3/ Thanks

    UPDATE 1: Newer values at the moment:

    Vector2 Position
    float Rotation
    Vector2 Pivot
    Rectangle TextureRectangle
    Texture2D Texture
    Color Color
    byte Facing (can be left or right, I'll do it with an enum)

    I re-read the tutorial; what I was doing wrong is not that I need to pass all those values, but that I need to pass only the changed values, as messages.

    UPDATE 2:

    Vector2 Position
    float Rotation
    Vector2 Pivot
    Rectangle TextureRectangle
    Texture2D Texture
    Color Color
    bool Flip
    uint DrawOrder
    Vector2 Scale
    bool Visible ?

    Mhhh, should Visible be included?


  • Knowledge Management 2.0 - Date Postponed to June 30

    - by Claudia Costa
    In organizations, the concept of the intranet is evolving from a simple repository of documents and links into a collaborative platform where employees can consult, browse, publish, analyze, comment on and add value to their own knowledge and that of others.

    During this session we will present Oracle's products and value proposition for the evolution of the intranet and Knowledge Management 2.0 (also known as Social KM). Click here and register.

    Agenda (Oracle, Lagoas Park / 9:30-14:30)
    09:15 - Welcome Coffee & Registration
    09:30 - Knowledge Management 2.0
    10:30 - KM 2.0 Demo with Oracle
    11:00 - Coffee Break
    11:30 - Oracle WebCenter Framework
    12:30 - Oracle WebCenter Spaces
    13:30 - Conclusion

    Prerequisites
    Each participant should bring a laptop with the following characteristics:
    * 2GB RAM, with WiFi access
    * Hard disk with 25GB of free space (in case you want to save the virtual machine made available during the session)

    Click here and register.

    * We apologize for this change. If this new date prevents you from participating, please let us know.


  • passing data to php in ajax [closed]

    - by MertMETIN
    I'm trying to pass my data; the checkboxes carry some data in their ids, and I try to read them, like this:

        <input align="right" type=checkbox name="checkArtist[]" class="checkClass" id="{$movieId}-{$mCast.id}"></li>

    Don't worry about {$movieId}-{$mCast.id}; they are Template Lite tags. I successfully read the checkbox data on the Ajax side. The problem is that I cannot send this data to my PHP.

        var artistIds = new Array();
        $(".p16 input:checked").each()(function(){
            artistIds.push($(this).attr('id'));
        });
        $.post('/json/crewonly/deleteDataAjax2',
            { 'artistIds': artistIds },
            function(response){
                if(response=='ok')
                    alert("ok");
        });

    The above code is my Ajax, but on the PHP side $artistIds is always empty. I found a topic on Stack Overflow: http://stackoverflow.com/questions/5571646/how-to-pass-a-javascript-array-via-jquery-post-so-that-all-its-contents-are-acce It says how to pass JS arrays:

        $.post('/url/to/page', {'someKeyName': variableName});

    This is the same as my code. What's going wrong?
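
    One thing a careful reader may notice in the snippet above: .each()(function(){...}) calls .each with no arguments and then tries to invoke its return value, so the callback never runs and artistIds stays empty before the request is ever sent. A sketch of how the collection-and-post step is usually written (the URL and key come from the question; jQuery 1.4+ serializes the array as artistIds[]=..., which PHP reads back as an array in $_POST['artistIds']):

        var artistIds = [];
        // .each takes the callback directly; .each()(fn) never runs it.
        $(".p16 input:checked").each(function () {
            artistIds.push($(this).attr("id"));
        });

        $.post("/json/crewonly/deleteDataAjax2",
            { "artistIds": artistIds },   // sent as artistIds[]=...&artistIds[]=...
            function (response) {
                if (response === "ok") alert("ok");
            });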


  • XNA - Saving Game Data without TypeWriter?

    - by Zabby
    In the past, I've had some trouble with reading and writing content for XNA. I think I'm firmly set on this tutorial, which sets up a four-project structure (Game, Content, Library, Pipeline Extension) for the solution that other websites suggest, too. The options it offers for content reading seem great. But there's a problem! This tutorial (and all others I've found) states that the Content Pipeline Extension project is not distributed with the game, which is fine in itself, but this is combined with the fact that any content-writing objects are placed in this non-distributable library. The ability to actually write content of an already existing type (save game files, namely) is pretty critical to the project I'm trying to make. I've already learned the hard way that you simply can't place the Content Pipeline reference in another project besides the extension for easier access to the intermediate deserializer. Is there another object I can access for writing save game data? Am I actually, despite the warnings of this tutorial, able to use the TypeWriter outside of the Content Pipeline Extension project? Or is there a third option I am missing here?


  • Handling changes to data types and entries in a database migration

    - by jandjorgensen
    I'm fully redesigning a site that indexes a number of articles with basic search functionality. The previous site was written about a decade ago, and I'm salvaging about 30,000 entries with data stored in less-than-ideal formats. While I'm moving from MSSQL to MySQL, I don't need to make any "live" changes, so this is not a production-level migration issue so much as a redesign. For instance, dates are stored the same as tags/subjects about the articles, but in strings as "YYYYMMDDd" (the lowercase d stands for "date" in the string). Essentially, before or after I move from the previous database format to a new one, I'm going to need to do a lot of replacement of individual entries. While I understand how to do operations with regular expressions in non-database issues, my database experience isn't robust enough to know the best way to handle this. What is the best (or standard) way to handle major changes like this? Is there an SQL operation I should be looking into? Please let me know if the problem isn't clear--I'm not entirely sure what kind of answer I'm looking for.
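
    As a purely illustrative example of the kind of per-entry rewriting involved, assuming a tag string such as "20100415d" (the "YYYYMMDDd" format is from the question; the helper name and the ISO target form are my own choices), the transform itself is a one-line regular expression; in JavaScript, for instance:

        // Convert a legacy "YYYYMMDDd" tag into an ISO date string ("YYYY-MM-DD").
        // Returns null for tags that do not match the date pattern, so ordinary
        // subject tags pass through a filter untouched.
        function legacyDateToIso(tag) {
            const m = /^(\d{4})(\d{2})(\d{2})d$/.exec(tag);
            return m ? `${m[1]}-${m[2]}-${m[3]}` : null;
        }

        console.log(legacyDateToIso("20100415d")); // "2010-04-15"
        console.log(legacyDateToIso("history"));   // null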


  • SMART Status Data Interpretation - Disk Utility

    - by Mah
    Last week my external hard disk (a Seagate Barracuda 1.5TB in a custom enclosure) showed signs of failure (Disk Utility SMART Pre-failure status, several bad sectors) and I decided to change it. I bought a new HDD (Seagate Barracuda 2TB) and connected it to my Ubuntu box with a SATA-to-USB cable that could not report SMART status. I copied all the contents of the old HDD to the new HDD (one partition with rsync, the other with parted cp) and then gently replaced the old HDD with the new one inside my aluminum enclosure. For obscure reasons, after reconnecting the new HDD through the old enclosure, the Linux box could not detect my partitions. I recovered the partitions with testdisk and restarted the computer. After the restart I checked the SMART status of the new HDD and I get this:

    Read Error Rate
    ---------------
    Normalized 108
    Worst 99
    Threshold 6
    Value 16737944

    I got a high value on the Seek Error Rate as well. Wondering why this happens, I copied a 2 GB directory from one partition to the other and rechecked the SMART status (5 minutes later). This time I got the following:

    Read Error Rate
    ---------------
    Normalized 109
    Worst 99
    Threshold 6
    Value 24792504

    As you can see, there has been an increase in the error rate. I am unable to interpret these numbers. Is my new hard disk already dying? What are the acceptable values in these fields for Seagate hard disks? And why is the assessment still good? While I could get temperature and airflow temperature data from my old HDD, I cannot fetch them for the new one. I noticed that my old HDD had got really hot sometimes. Is it possible that the enclosure is killing the hard disks due to high temperature?... Thanks


  • Controlling what data populates STAR

    - by user10747017
    Beginning with the Primavera Reporting Database 2.2 \ P6 Analytics 1.2 release, the first release that supported the P6 Extended Schema, a new ability was added to filter which projects can be included during an ETL run. In previous releases, all projects were included in an ETL run. Additionally, all projects with the option to enable publication are included in the ETL run by default. Because the reporting needs for the P6 Extended Schema are different from those of STAR, you can define a filter that will limit the data included in the STAR schema. For example, your STAR schema can be filtered to include only the projects in a specific portfolio, or all projects with a project code assignment of 'For Analytics.' Any criteria that can be defined in a WHERE clause and added to a view can be used to filter the projects included in the STAR schema. I highly suggest this approach when dealing with large databases; unnecessary projects could cause the Extract portion of the ETL process to take longer. A table in STAR called etl_projectlist is the key to which projects are targeted during the ETL process. To set up the filter, perform the following steps:

    1. Connect to your Primavera P6 Project Management database as Pxrptuser (the extended schema owner) and create a new view:

        create or replace view star_project_view
        as
        select PROJECTOBJECTID objectid
        from projectportfolio pp, projectprojectportfolio ppp
        where pp.objectid = ppp.PROJECTPORTFOLIOOBJECTID
        and pp.name = 'STAR Projects'

    The main field that MUST be selected in the view is the projectobjectid. Selecting any other field besides the projectobjectid will cause the view to be invalid and it will not work. Any WHERE clause can be used, but projectobjectid is the key.

    2. In your STAR installation directory, go to the \res folder and edit the staretl.properties file. Here you will define the view to be used. Add the following line, or update it if it already exists:

        star.project.filter.ds1=star_project_view

    3. When running the staretl.cmd or staretl.sh process, the database link to Pxrptuser will be accessed and this view will be used to populate the etl_projectlist table with the appropriate projectobjectids, as defined in the view created in step 1 above.


  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to improve my system to run better with a Solid State Drive. But regarding RAM disks and /etc/fstab usage, I have some problems understanding what happens. So let's say I add the following lines to /etc/fstab:

        tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
        tmpfs /var/log tmpfs defaults,noatime,nodiratime,mode=0755 0 0

    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical space that was mounted at those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:

        none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0

    What does this actually do? I noticed that /var/cache takes almost 1GB of space on my hard disk. So should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification.

    UPDATE

    I've downloaded an image of 600MB in size into /tmp, which is mounted with the tmpfs settings above. Now I wanted to compare the RAM usage before and after the download. I expected the RAM usage to be increased by 600MB after the download. But the System Monitoring tool showed me no changes at all. How can this be? Does tmpfs work differently than I expect it to?


  • Better ways to have valuable data indexed, which is ignored currently

    - by Sam
    <a title="">.../a> Hi folks. It seems that my title tag which holds extremely valuable and describes contents on my simple design page is currently compeltely denied by search engines and not indexed at all!! Those descriptions should however be indexed as the describe valuable portions to an otherwise empty page with clean glossary (thats neat and organised to the eye of the viewer. So putting all that descriptive data into visible space would ruin the designish less is more fundamental... So, which alternatives to the title tag do I have, in order to put important contents that are relevant for both user as well as search engines? A <a name="">......</> B <p name="">......</> C <a alt="">.......</> D <p alt="">.......</> From the above list, arose my question: Which of the above is advisable alternative in order to get the valuable actual content indexed? Should it be in a a tag or p tag? Or are there even better tags for this which still keep layout clean? You suggestions are Much appreciated!


  • How to resolve "dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb"?

    - by raz7588
    Update Manager will not update although I have over 100 updates to do I get a error message like this: installArchives() failed: Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... Extracting templates from packages: 29%% Extracting templates from packages: 58%% Extracting templates from packages: 88%% Extracting templates from packages: 100%% Preconfiguring packages ... (Reading database ... (Reading database ... 5%% (Reading database ... 10%% (Reading database ... 15%% (Reading database ... 20%% (Reading database ... 25%% (Reading database ... 30%% (Reading database ... 35%% (Reading database ... 40%% (Reading database ... 45%% (Reading database ... 50%% (Reading database ... 55%% (Reading database ... 60%% (Reading database ... 65%% (Reading database ... 70%% (Reading database ... 75%% (Reading database ... 80%% (Reading database ... 85%% (Reading database ... 90%% (Reading database ... 95%% (Reading database ... 100%% (Reading database ... 189751 files and directories currently installed.) Preparing to replace python-problem-report 2.0.1-0ubuntu7 (using .../python-problem-report_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-apport 2.0.1-0ubuntu7 (using .../python-apport_2.0.1-0ubuntu9_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace apport 2.0.1-0ubuntu7 (using .../apport_2.0.1-0ubuntu9_all.deb) ... apport stop/waiting Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already apport start/running Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace gnome-orca 3.4.1-0ubuntu0.1 (using .../gnome-orca_3.4.2-0ubuntu0.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace python-piston-mini-client 0.7.2-0ubuntu1 (using .../python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... 
Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace oneconf 0.2.8 (using .../oneconf_0.2.8.1_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/oneconf_0.2.8.1_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace software-center 5.2.2 (using .../software-center_5.2.2.2_all.deb) ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: warning: subprocess old pre-removal script returned error exit status 1 dpkg - trying script from the new package instead ... Traceback (most recent call last): File "/usr/bin/pyclean", line 33, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error processing /var/cache/apt/archives/software-center_5.2.2.2_all.deb (--unpack): subprocess new pre-removal script returned error exit status 1 No apport report written because MaxReports is reached already Traceback (most recent call last): File "/usr/bin/pycompile", line 39, in <module> from debpython.namespace import add_namespace_files ValueError: bad marshal data (unknown type code) dpkg: error while cleaning up: subprocess installed post-installation script returned error exit status 1 Preparing to replace libglade2-0 1:2.6.4-1ubuntu1 (using .../libglade2-0_1%%3a2.6.4-1ubuntu1.1_amd64.deb) ... Unpacking replacement libglade2-0 ... Preparing to replace libv4l-0 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_amd64.deb) ... De-configuring libv4l-0:i386 ... Unpacking replacement libv4l-0 ... Preparing to replace libv4l-0:i386 0.8.6-1ubuntu1 (using .../libv4l-0_0.8.6-1ubuntu2_i386.deb) ... Unpacking replacement libv4l-0:i386 ... Preparing to replace libv4lconvert0:i386 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_i386.deb) ... De-configuring libv4lconvert0 ... Unpacking replacement libv4lconvert0:i386 ... 
Preparing to replace libv4lconvert0 0.8.6-1ubuntu1 (using .../libv4lconvert0_0.8.6-1ubuntu2_amd64.deb) ... Unpacking replacement libv4lconvert0 ... Errors were encountered while processing: /var/cache/apt/archives/python-problem-report_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/python-apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/apport_2.0.1-0ubuntu9_all.deb /var/cache/apt/archives/gnome-orca_3.4.2-0ubuntu0.1_all.deb /var/cache/apt/archives/python-piston-mini-client_0.7.2+bzr57-0ubuntu1_all.deb /var/cache/apt/archives/oneconf_0.2.8.1_all.deb /var/cache/apt/archives/software-center_5.2.2.2_all.deb Error in function: SystemError: E:Sub-process /usr/bin/dpkg returned an error code (1) Setting up libglade2-0 (1:2.6.4-1ubuntu1.1) ... dpkg: error processing gnome-orca (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. dpkg: error processing python-problem-report (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4lconvert0 (0.8.6-1ubuntu2) ... Setting up libv4lconvert0:i386 (0.8.6-1ubuntu2) ... dpkg: error processing python-piston-mini-client (--configure): Package is in a very bad inconsistent state - you should reinstall it before attempting configuration. Setting up libv4l-0 (0.8.6-1ubuntu2) ... Setting up libv4l-0:i386 (0.8.6-1ubuntu2) ... dpkg: dependency problems prevent configuration of python-apport: python-apport depends on python-problem-report (>= 0.94); however: Package python-problem-report is not configured yet. dpkg: error processing python-apport (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of software-center: software-center depends on python-piston-mini-client (>= 0.1+bzr29); however: Package python-piston-mini-client is not configured yet. dpkg: error processing software-center (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of oneconf: oneconf depends on python-piston-mini-client (>= 0.3+bzr32-0ubuntu1); however: Package python-piston-mini-client is not configured yet. dpkg: error processing oneconf (--configure): dependency problems - leaving unconfigured dpkg: dependency problems prevent configuration of apport: apport depends on python-apport (>= 2.0.1-0ubuntu7); however: Package python-apport is not configured yet. dpkg: error processing apport (--configure): dependency problems - leaving unconfigured Processing triggers for libc-bin ... ldconfig deferred processing now taking place This has been going on for two weeks now and I cannot get any updates. Any help would be great.


  • Search in Projects API

    - by Geertjan
    Today I got some help from Jaroslav Havlin, the creator of the new "Search in Projects API". Below are the steps to create a search provider that finds recently modified files, via a new tab in the "Find in Projects" dialog. Here's how to get to the result shown in the screenshot above.

    Create a new NetBeans module project named "RecentlyModifiedFilesSearch". Then set dependencies on these libraries: Search in Projects API, Lookup API, Utilities API, Dialogs API, Datasystems API, File System API, Nodes API.

    Create and register an implementation of "SearchProvider". This class tells the application the name of the provider and how it can be used. It should be registered via the @ServiceProvider annotation. Methods to implement:

    * Method createPresenter creates a new object that is added to the "Find in Projects" dialog when it is opened.
    * Method isReplaceSupported should return true if this provider supports replacing, not only searching.
    * If you want to disable the search provider (e.g., the required external tools are not available in the OS), return false from isEnabled.
    * Method getTitle returns a string that will be shown in the tab in the "Find in Projects" dialog. It can be localizable.

    Example file "org.netbeans.example.search.ExampleSearchProvider":

        package org.netbeans.example.search;

        import org.netbeans.spi.search.provider.SearchProvider;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.openide.util.lookup.ServiceProvider;

        @ServiceProvider(service = SearchProvider.class)
        public class ExampleSearchProvider extends SearchProvider {

            @Override
            public Presenter createPresenter(boolean replaceMode) {
                return new ExampleSearchPresenter(this);
            }

            @Override
            public boolean isReplaceSupported() {
                return false;
            }

            @Override
            public boolean isEnabled() {
                return true;
            }

            @Override
            public String getTitle() {
                return "Recent Files Search";
            }
        }

    Next, we need to create a SearchProvider.Presenter. This is an object that is passed to the "Find in Projects" dialog and contains a visual component to show in the dialog, together with some methods to interact with it. Methods to implement:

    * Method getForm returns a JComponent that should contain controls for various search criteria. In the example below, we have controls for a file name pattern, search scope, and the age of files.
    * Method isUsable is called by the dialog to check whether the Find button should be enabled or not. You can use the NotificationLineSupport passed as its argument to display an error, warning, or info message.
    * Method composeSearch is used to apply the settings and prepare a search task. It returns a SearchComposition object, as shown below.

    Please note that the example uses ComponentUtils.adjustComboForFileName (and similar methods), which modify a JComboBox component to act as a combo box for selection of a file name pattern. These methods were designed to make working with components created in a GUI Builder comfortable. Remember to call fireChange whenever the value of any criterion changes.

    Example file "org.netbeans.example.search.ExampleSearchPresenter":

        package org.netbeans.example.search;

        import java.awt.FlowLayout;
        import javax.swing.BoxLayout;
        import javax.swing.JComboBox;
        import javax.swing.JComponent;
        import javax.swing.JLabel;
        import javax.swing.JPanel;
        import javax.swing.JSlider;
        import javax.swing.event.ChangeEvent;
        import javax.swing.event.ChangeListener;
        import org.netbeans.api.search.SearchScopeOptions;
        import org.netbeans.api.search.ui.ComponentUtils;
        import org.netbeans.api.search.ui.FileNameController;
        import org.netbeans.api.search.ui.ScopeController;
        import org.netbeans.api.search.ui.ScopeOptionsController;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider;
        import org.openide.NotificationLineSupport;
        import org.openide.util.HelpCtx;

        public class ExampleSearchPresenter extends SearchProvider.Presenter {

            private JPanel panel = null;
            ScopeOptionsController scopeSettingsPanel;
            FileNameController fileNameComboBox;
            ScopeController scopeComboBox;
            ChangeListener changeListener;
            JSlider slider;

            public ExampleSearchPresenter(SearchProvider searchProvider) {
                super(searchProvider, false);
            }

            /**
             * Get UI component that can be added to the search dialog.
             */
            @Override
            public synchronized JComponent getForm() {
                if (panel == null) {
                    panel = new JPanel();
                    panel.setLayout(new BoxLayout(panel, BoxLayout.PAGE_AXIS));
                    JPanel row1 = new JPanel(new FlowLayout(FlowLayout.LEADING));
                    JPanel row2 = new JPanel(new FlowLayout(FlowLayout.LEADING));
                    JPanel row3 = new JPanel(new FlowLayout(FlowLayout.LEADING));
                    row1.add(new JLabel("Age in hours: "));
                    slider = new JSlider(1, 72);
                    row1.add(slider);
                    final JLabel hoursLabel = new JLabel(String.valueOf(slider.getValue()));
                    row1.add(hoursLabel);
                    row2.add(new JLabel("File name: "));
                    fileNameComboBox = ComponentUtils.adjustComboForFileName(new JComboBox());
                    row2.add(fileNameComboBox.getComponent());
                    scopeSettingsPanel = ComponentUtils.adjustPanelForOptions(new JPanel(), false, fileNameComboBox);
                    row3.add(new JLabel("Scope: "));
                    scopeComboBox = ComponentUtils.adjustComboForScope(new JComboBox(), null);
                    row3.add(scopeComboBox.getComponent());
                    panel.add(row1);
                    panel.add(row3);
                    panel.add(row2);
                    panel.add(scopeSettingsPanel.getComponent());
                    initChangeListener();
                    slider.addChangeListener(new ChangeListener() {
                        @Override
                        public void stateChanged(ChangeEvent e) {
                            hoursLabel.setText(String.valueOf(slider.getValue()));
                        }
                    });
                }
                return panel;
            }

            private void initChangeListener() {
                this.changeListener = new ChangeListener() {
                    @Override
                    public void stateChanged(ChangeEvent e) {
                        fireChange();
                    }
                };
                fileNameComboBox.addChangeListener(changeListener);
                scopeSettingsPanel.addChangeListener(changeListener);
                slider.addChangeListener(changeListener);
            }

            @Override
            public HelpCtx getHelpCtx() {
                return null; // Some help should be provided, omitted for simplicity.
            }

            /**
             * Create search composition for criteria specified in the form.
             */
            @Override
            public SearchComposition<?> composeSearch() {
                SearchScopeOptions sso = scopeSettingsPanel.getSearchScopeOptions();
                return new ExampleSearchComposition(sso, scopeComboBox.getSearchInfo(), slider.getValue(), this);
            }

            /**
             * Here we return always true, but could return false e.g. if file name
             * pattern is empty.
             */
            @Override
            public boolean isUsable(NotificationLineSupport notifySupport) {
                return true;
            }
        }

    The last part of our search provider is the implementation of SearchComposition. This is a composition of various search parameters, the actual search algorithm, and the displayer that presents the results. Methods to implement:

    * The most important method here is start, which performs the actual search. In this case, SearchInfo and SearchScopeOptions objects are used for traversing. These objects were provided by controllers of GUI components (in the presenter). When something interesting is found, it should be displayed (with SearchResultsDisplayer.addMatchingObject).
    * Method getSearchResultsDisplayer should return the displayer associated with this composition. The displayer can be created by subclassing the SearchResultsDisplayer class or simply by using SearchResultsDisplayer.createDefault. Then you only need a helper object that can create nodes for found objects.

    Example file "org.netbeans.example.search.ExampleSearchComposition" (the original listing omits its import section; the imports below are filled in to match the types used and should be double-checked against the API's javadoc):

        package org.netbeans.example.search;

        import java.util.concurrent.atomic.AtomicBoolean;
        import org.netbeans.api.search.SearchScopeOptions;
        import org.netbeans.api.search.provider.SearchInfo;
        import org.netbeans.api.search.provider.SearchListener;
        import org.netbeans.spi.search.provider.SearchComposition;
        import org.netbeans.spi.search.provider.SearchProvider.Presenter;
        import org.netbeans.spi.search.provider.SearchResultsDisplayer;
        import org.openide.filesystems.FileObject;
        import org.openide.loaders.DataObject;
        import org.openide.loaders.DataObjectNotFoundException;
        import org.openide.nodes.FilterNode;

        public class ExampleSearchComposition extends SearchComposition<DataObject> {

            SearchScopeOptions searchScopeOptions;
            SearchInfo searchInfo;
            int oldInHours;
            SearchResultsDisplayer<DataObject> resultsDisplayer;
            private final Presenter presenter;
            AtomicBoolean terminated = new AtomicBoolean(false);

            public ExampleSearchComposition(SearchScopeOptions searchScopeOptions,
                    SearchInfo searchInfo, int oldInHours, Presenter presenter) {
                this.searchScopeOptions = searchScopeOptions;
                this.searchInfo = searchInfo;
                this.oldInHours = oldInHours;
                this.presenter = presenter;
            }

            @Override
            public void start(SearchListener listener) {
                for (FileObject fo : searchInfo.getFilesToSearch(
                        searchScopeOptions, listener, terminated)) {
                    if (ageInHours(fo) < oldInHours) {
                        try {
                            DataObject dob = DataObject.find(fo);
                            getSearchResultsDisplayer().addMatchingObject(dob);
                        } catch (DataObjectNotFoundException ex) {
                            listener.fileContentMatchingError(fo.getPath(), ex);
                        }
                    }
                }
            }

            @Override
            public void terminate() {
                terminated.set(true);
            }

            @Override
            public boolean isTerminated() {
                return terminated.get();
            }

            /**
             * Use default displayer to show search results.
             */
            @Override
            public synchronized SearchResultsDisplayer<DataObject> getSearchResultsDisplayer() {
                if (resultsDisplayer == null) {
                    resultsDisplayer = createResultsDisplayer();
                }
                return resultsDisplayer;
            }

            private SearchResultsDisplayer<DataObject> createResultsDisplayer() {
                /**
                 * Object to transform matching objects to nodes.
                 */
                SearchResultsDisplayer.NodeDisplayer<DataObject> nd =
                        new SearchResultsDisplayer.NodeDisplayer<DataObject>() {
                    @Override
                    public org.openide.nodes.Node matchToNode(final DataObject match) {
                        return new FilterNode(match.getNodeDelegate()) {
                            @Override
                            public String getDisplayName() {
                                return super.getDisplayName()
                                        + " (" + ageInMinutes(match.getPrimaryFile()) + " minutes old)";
                            }
                        };
                    }
                };
                return SearchResultsDisplayer.createDefault(nd, this, presenter,
                        "less than " + oldInHours + " hours old");
            }

            private static long ageInMinutes(FileObject fo) {
                long fileDate = fo.lastModified().getTime();
                long now = System.currentTimeMillis();
                return (now - fileDate) / 60000;
            }

            private static long ageInHours(FileObject fo) {
                return ageInMinutes(fo) / 60;
            }
        }

    Run the module, select a node in the Projects window, press Ctrl-F, and you'll see that the "Find in Projects" dialog has two tabs; the second is the one you provided above.

