Search Results

Search found 58655 results on 2347 pages for 'dojo data'.


  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature that cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, even though there is no backup and recovery UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provision the service using settings from a previous backup, configure and manage macro and query settings, manage and configure session management, and configure all the global settings of the service.
    Key Benefits of SharePoint Server Access Services
    - Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage™ view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location.
    - Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. You can find new pre-built templates that you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. Build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: create navigation forms to make your frequently used forms and reports more accessible without writing any code or logic.
    - Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database.
    - Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive.
    - Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression building experience with IntelliSense®. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not the objects that update your data.
    Key features of Access Services 2010
    - Access database content through a Web browser: Newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new Web databases. Users without an Access client can open Web forms and reports via a browser, and changes are automatically synchronized.
    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.
    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.
    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.
    - Simplified formatting: By using Office themes you can create coordinated, professional forms and reports across your database. Simply select a familiar and great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • JTable.setRowHeight prevents me from adding more rows

    - by Brent Parker
    I'm working on a pretty simple Java app in order to learn more about JTables, TableModels, and custom cell renderers. The table is a simple table with 8 columns only with text in them. When you click on an "add" button, a dialog pops up and lets you enter the data for the columns. Now to my problem. One of the columns (the last one) should allow for multiple lines of text. I'm already putting HTML into the field, but it is not wrapping. I did some research and looked into JTable#setRowHeight(). However, once I use setRowHeight, I can no longer add rows to the table. The data is put into the table model, but it does not show in the table. If I remove the setRowHeight line, then it adds data just fine. Is there another step to adding data to my data model that I'm missing? Thanks a lot!
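    setRowHeight itself does not block inserts, so the first thing to check is whether the custom TableModel fires fireTableRowsInserted (or a DefaultTableModel.addRow call does it for you) when new data arrives; without that event the JTable never repaints the new rows. Separately, a wrapping renderer can size each row to its content, which removes the need for both the HTML trick and a global setRowHeight. A minimal sketch, with illustrative names rather than anything from the original post:

        import javax.swing.JTable;
        import javax.swing.JTextArea;
        import javax.swing.table.TableCellRenderer;
        import java.awt.Component;

        // A renderer that wraps long text and grows its row to fit the wrapped content.
        public class WrappingCellRenderer extends JTextArea implements TableCellRenderer {
            public WrappingCellRenderer() {
                setLineWrap(true);
                setWrapStyleWord(true);
                setOpaque(true);
            }

            @Override
            public Component getTableCellRendererComponent(JTable table, Object value,
                    boolean isSelected, boolean hasFocus, int row, int column) {
                setText(value == null ? "" : value.toString());
                // Constrain the width to the column so getPreferredSize() reports the wrapped height.
                setSize(table.getColumnModel().getColumn(column).getWidth(), Short.MAX_VALUE);
                int preferred = getPreferredSize().height;
                if (table.getRowHeight(row) != preferred) {
                    table.setRowHeight(row, preferred);   // per-row height, no global setRowHeight needed
                }
                return this;
            }
        }

    Installing it on the last column would look like table.getColumnModel().getColumn(7).setCellRenderer(new WrappingCellRenderer()); the column index is just an example.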

    Read the article

  • Highstock set different colors for different markers

    - by user1651804
    I am trying to develop a chart in Highstock where each marker gets an individual color. When I push my data into an array like this: series.data.push([ parseInt(Math.round(myDate.getTime())), parseInt(Math.round(myDate.getHours())) ]); the data is shown correctly, but when I try to set the color within each data object like this, the points are no longer shown: series.data.push([ { x: parseInt(Math.round(myDate.getTime())), y: parseInt(Math.round(myDate.getHours())), marker:{fillColor: 'red'}} ]); What am I missing? Update: When I inspect it with Firebug I can see that the series is set correctly within the chart object, so why does it not show up? :/

    Read the article

  • google url shortener api and jquery not working

    - by rahim
    I can't seem to get Google's new URL shortener API to work with jQuery's post method: $(document).ready(function() { $.post("https://www.googleapis.com/urlshortener/v1/url", { longUrl: "http://www.google.com/"}, function(data){ console.log("data" + data); }); $('body').ajaxError(function(e, xhr, settings, exception) { $(this).text('fail'+e); console.log(exception); }); }); All of this gives me an empty (data) response AND an empty (exception) response. Any ideas? I've also tried this with no success: $.ajax({ type: 'POST', url: "https://www.googleapis.com/urlshortener/v1/url", data: { longUrl: "http://www.google.com/"}, success: success, dataType: "jsonp" });

    Read the article

  • Get correct output from UTF-8 stored in VarChar using Entity Framework or Linq2SQL?

    - by jasonpenny
    Borland StarTeam seems to store its data as UTF-8 encoded data in VarChar fields. I have an ASP.NET MVC site that returns some custom HTML reports using the StarTeam database, and I would like to find a better solution for getting the correct data, for a rewrite with MVC2. I tried a few things with Encoding GetBytes and GetString, but I couldn't get it to work (we use mostly Delphi at work); then I figured out a T-SQL function to return a NVarChar from UTF-8 stored in a VarChar, and created new SQL views which return the data as NVarChar, but it's slow. The actual problem appears like this: “description†instead of “description”, in both SSMS and in a webpage when using Linq2SQL Is there a way to get the proper data out of these fields using Entity Framework or Linq2SQL?

    Read the article

  • Financial Management: Why Move to the Cloud?

    - by Kathryn Perry
    A guest post by Terrance Wampler, Vice President, Financials Product Strategy, Oracle I’ve spent my career designing and developing financial management systems, most of it at Oracle. Every single day I either meet with our customers or talk to them on the phone. The time is usually spent discussing various business challenges facing CFOs and Controllers, who are running Oracle’s Financials. Lately, we’ve been talking a lot about cloud computing and whether it makes sense for finance to go to the cloud. Here are some pros and cons that might help you make that decision. Let’s start with the benefits of cloud solutions. The first is savings. With cloud services, you pay only for those commodities that you use. That makes you feel like you're getting better value for your money. Plus, you can preserve your cash for your core business and you can get a better matching of expenses and revenues. So, at the top of the list is lower total cost of ownership. The second point has to do with optimization. With cloud services, you’ll need less IT infrastructure so you can optimize your IT resources for better-value, higher-end projects. This also leads to greater financial visibility, where there's a clear cost for the set of services or features replaced by cloud services. And, the last benefit is what I call acceleration. You can save money by speeding up the initialization and deployment of the project. You don't have to deal with IT infrastructure and you can start implementing right away. We did a quick survey of about 70 CFOs at the CFO Summit last month in New York City. We asked them why they were looking at cloud services, and not necessarily just for financials. The No. 1 response was perceived lower cost of ownership. But of course there are risks to consider. The first thing most people think about in the cloud is security and ownership of data. So, will your data really be safe? Can you meet your own privacy policy requirements? Do you really want your private financial data exposed? Do you trust the provider? Is what you see really your data? Do you own it or is it managed by someone else? Security is a big concern that comes with an emotional component. The next thing in the risk category is reliability. Is the provider proven? You’re taking what you have control over – for example, standards and policies and internal service level agreements – away from your IT department and giving it to someone else. Will you still be able to adapt to shifts in your business? Will the provider be able to grow with your business effectively? Reliability means having a provider that can give you the service infrastructure that you need. And then there’s performance, which has two components in terms of risk. Going forward, will the provider be able to scale the infrastructure or service level if you have new employees or new businesses? And second, will the price you negotiate and the rate you lock in cover additional costs and rising service fees? Another piece is cost. What happens if you don't get the service level you want? What if you end the service? What happens, if after a few years, you send the service out for bid and change service? Can you move your data? Can you move the applications? Do the integrations work? These are cost components people don’t always take into account. And, the final piece is the business case. The perception is that you can get started really quickly with cloud. It has a perceived lower cost of total ownership and it feels cool because it's cloud. 
But do you have a good business case for moving to the cloud? Your total cost of ownership is over three years; then you’ll renew it, so your TCO is six years. Have you compared that to other internal services that you’re offering? You might already have product that you can run this new business or division on. In that same survey at the CFO Summit, the execs thought the biggest perceived risks were security of data, ability to move data back, and the ability to create a business case to actually justify the risks. So that’s the list of pros and cons. Not to leave you hanging, I will do another post on how to balance these pros and cons and make the right decision for your business.

    Read the article

  • Mathematica - Import CSV and process columns?

    - by Casey
    I have a CSV file that is formatted like: 0.0023709,8.5752e-007,4.847e-008 and I would like to import it into Mathematica and then have each column separated into a list so I can do some math on the selected column. I know I can import the data with: Import["data.csv"] then I can separate the columns with this: StringSplit[data[[1, 1]], ","] which gives: {"0.0023709", "8.5752e-007", "4.847e-008"} The problem now is that I don't know how to get the data into individual lists and also Mathematica does not accept scientific notation in the form 8.5e-007. Any help in how to break the data into columns and format the scientific notation would be great. Thanks in advance.

    Read the article

  • SSIS web service task parsing result.

    - by dbengals
    I have an SSIS (2005) package that uses the Web Service task to download to a file destination. The file contains a string of XML data. After downloading, the file looks like this: <?xml version="1.0" encoding="utf-16"?> <string>--here is XML data with escaped characters--</string> My thought was that I could then use the XML Source in a data flow to pull the <string> data, but when I set this up the XML Source will not read the <string> element as a column. It generates an XSD that seems normal, but I have no luck seeing the column. Any ideas on getting this to work? Or would there be a better way to pull the data out of the file generated from the web service? Thanks.
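    Inside SSIS the usual approach is an XML Task or a Script Task that strips the outer <string> wrapper (and unescapes the payload) before the XML Source ever sees the file; the XML Source only sees one element whose content is text, which is why no columns appear. SSIS scripting uses C#/VB.NET rather than Java, so purely to illustrate the two-stage parse that is needed, here is a hedged standalone Java sketch (the file name is a placeholder):

        import java.io.File;
        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilder;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class UnwrapWebServicePayload {
            public static void main(String[] args) throws Exception {
                DocumentBuilder builder =
                        DocumentBuilderFactory.newInstance().newDocumentBuilder();

                // Pass 1: parse the downloaded file; its root element is <string>.
                Document outer = builder.parse(new File("webservice_output.xml"));

                // getTextContent() returns the payload with the escaped characters
                // (&lt;, &gt;, &amp;) decoded back into real markup.
                String innerXml = outer.getDocumentElement().getTextContent();

                // Pass 2: parse the decoded payload as its own XML document.
                builder.reset();
                Document inner = builder.parse(new InputSource(new StringReader(innerXml)));
                System.out.println("Inner root element: "
                        + inner.getDocumentElement().getNodeName());
            }
        }

    The point is simply that the payload has to be decoded and parsed a second time; once something has done that unwrapping, the XML Source can be pointed at the resulting document.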

    Read the article

  • Interaction between Java and Android

    - by Grasper
    I am currently trying to research how to use Android with an existing java based system. Basically, I need to communicate to/from an Android application. The system currently passes object data from computer to computer using ActiveMQ as the JMS provider. On one of the computers is a display which shows object data to the user. What we want to do now is use a phone (running Android) as another option to show this object data to a user with wifi/network access. Ideally we would like to have a native application on the Android that would listen to the ActiveMQ topic and publish to another Topic and read/write/display the object data, but from some research I have done, I am not sure if this is possible. What are some other ways to approach this problem? The android Phone needs to be able to send/receive data. I have been using the AndroidEmulator for testing.
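    Whether the stock ActiveMQ JMS client runs on Android is exactly the open question here; it has historically been problematic, and a common workaround is to have the phone talk to the same broker over one of its other transports (for example STOMP) while the desktop side keeps using JMS. For reference, the plain-JVM consumer side of a topic looks roughly like the sketch below; the broker URL and topic name are placeholders:

        import javax.jms.Connection;
        import javax.jms.Message;
        import javax.jms.MessageConsumer;
        import javax.jms.MessageListener;
        import javax.jms.ObjectMessage;
        import javax.jms.Session;
        import javax.jms.Topic;
        import org.apache.activemq.ActiveMQConnectionFactory;

        // Minimal JMS topic listener against an ActiveMQ broker (placeholder URL/topic).
        public class ObjectDataListener {
            public static void main(String[] args) throws Exception {
                ActiveMQConnectionFactory factory =
                        new ActiveMQConnectionFactory("tcp://broker-host:61616");
                Connection connection = factory.createConnection();
                connection.start();

                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Topic topic = session.createTopic("OBJECT.DATA");
                MessageConsumer consumer = session.createConsumer(topic);

                consumer.setMessageListener(new MessageListener() {
                    @Override
                    public void onMessage(Message message) {
                        try {
                            if (message instanceof ObjectMessage) {
                                Object payload = ((ObjectMessage) message).getObject();
                                System.out.println("Received: " + payload);
                            }
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                });

                // Keep the JVM alive while listening; real code would manage lifecycle properly.
                Thread.currentThread().join();
            }
        }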

    Read the article

  • Is OO design's strength in semantics or encapsulation?

    - by Phil H
    Object-oriented design (OOD) combines data and its methods. This, as far as I can see, achieves two great things: it provides encapsulation (so I don't care what data there is, only how I get the values I want) and semantics (it relates the data together with names, and its methods consistently use the data as originally intended). So where does OOD's strength lie? In contrast, functional programming attributes the richness to the verbs rather than the nouns, and so both encapsulation and semantics are provided by the methods rather than the data structures. I work with a system that is on the functional end of the spectrum, and continually long for the semantics and encapsulation of OO. But I can see that OO's encapsulation can be a barrier to flexible extension of an object. So at the moment, I can see the semantics as a greater strength. Or is encapsulation the key to all worthwhile code?
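    To make the distinction concrete, here is a deliberately tiny Java sketch (assuming Java 16+ for the record syntax; the names are illustrative): the first version hides its state and carries its own verbs, so both encapsulation and semantics live on the noun; the second exposes the data and puts the semantics in free-standing verbs.

        // Object-oriented flavour: data and behaviour travel together; the field is hidden.
        final class Account {
            private long balanceInCents;                 // encapsulated state

            void deposit(long cents) {                   // the semantics live on the noun
                if (cents < 0) throw new IllegalArgumentException("negative deposit");
                balanceInCents += cents;
            }

            long balance() { return balanceInCents; }
        }

        // Functional flavour: a transparent record plus free-standing verbs.
        record AccountData(long balanceInCents) { }

        final class AccountOps {
            static AccountData deposit(AccountData a, long cents) {   // the semantics live on the verb
                if (cents < 0) throw new IllegalArgumentException("negative deposit");
                return new AccountData(a.balanceInCents() + cents);
            }
        }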

    Read the article

  • Issue with JSON and jQuery

    - by Jason N. Gaylord
    I'm calling a web service and returning the following data in JSON format: ["OrderNumber":"12345","CustomerId":"555"] In my web service success method, I'm trying to parse both values: $.ajax({ type: "POST", url: "MyService.asmx/ServiceName", data: "{}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(msg) { var data = msg.d; var rtn = ""; $.each(data, function(list) { rtn = rtn + this.OrderNumber + ", " + this.CustomerId + "<br/>"; }); rtn = rtn + "<br/>" + data; $("#test").html(rtn); } }); but I'm getting a bunch of "undefined, undefined" rows followed by the correct JSON string. Any idea why? I've tried using the eval() method but that didn't help as I got an error message saying ']' was expected.

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements for the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions. The work on these was conceived, implemented and contributed by the engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression.
    In InnoDB, compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well, i.e. we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll always have only the compressed page. When both compressed and uncompressed images are present in the buffer pool they are always kept in sync, i.e. changes are applied to both atomically.
    Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space available in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT+DELETE+purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well then we split the page into two and recompress both pages.
    Now let's talk about the three major improvements that we made in MySQL 5.6.
    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happens. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompress attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion, and when we generate much more than normal redo we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, this behavior is controlled by a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which is the same as the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.
    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again the parameter is dynamic, i.e. you can change it on the fly.
    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failure depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should try to pack the 16K uncompressed version of the page less densely, i.e. we let some space in the 16K page go unused in the hope that the recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page till we get compression failures within an agreeable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. A value of 0 has the special meaning of disabling the padding. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.

    Read the article

  • How the kernel gives seg. fault for a scenario like this?

    - by bala1486
    I have a question about accessing invalid data. How will the OS cause a segmentation fault in a scenario like this? Suppose a data segment is about 100 bytes. It will be mapped and a page table entry will be created, but the page size is 4K. Assume the data segment is aligned with the page boundary. First, consider accessing valid data within the 100 bytes, so the page table entry is now in the TLB. Next, if you try to access invalid data between the 100 bytes and the 4K boundary, the entry is there in the page table, so will the access to the invalid data be allowed? Thanks, Bala

    Read the article

  • Reused UIWebView showing previous loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view I reload cached data into the webview using the method - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL and I wait for the callback - (void)webViewDidFinishLoad:(UIWebView *)webView. In the meantime I hide the webview and show a 'loading' label. Only when I receive webViewDidFinishLoad do I show the webview. Many times I see the previous data that was loaded into the webview for a brief second before the new data kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and reallocating it every time?

    Read the article

  • Android serialization: ImageView

    - by embo
    I have a simple class: public class Ball2 extends ImageView implements Serializable { public Ball2(Context context) { super(context); } } Serialization works: private void saveState() throws IOException { ObjectOutputStream oos = new ObjectOutputStream(openFileOutput("data", MODE_PRIVATE)); try { Ball2 data = new Ball2(Game2.this); oos.writeObject(data); oos.flush(); } catch (Exception e) { Log.e("write error", e.getMessage(), e); } finally { oos.close(); } } But deserialization private void loadState() throws IOException { ObjectInputStream ois = new ObjectInputStream(openFileInput("data")); try { Ball2 data = (Ball2) ois.readObject(); } catch (Exception e) { Log.e("read error", e.getMessage(), e); } finally { ois.close(); } } fails with the error: 03-24 21:52:43.305: ERROR/read error(1948): java.io.InvalidClassException: android.widget.ImageView; IllegalAccessException How do I deserialize the object correctly?
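    ImageView (like every android.view.View) is not designed to be serialized: it drags a Context and the rest of the view hierarchy with it, which is why readObject fails even though writeObject appeared to succeed. The usual fix is to serialize a small state object instead and rebuild the view from it after loading. A minimal sketch; the field names and drawable id are illustrative, and setX/setY require API 11+:

        import java.io.Serializable;
        import android.content.Context;
        import android.widget.ImageView;

        // Serialize the ball's state, not the View itself.
        public class BallState implements Serializable {
            private static final long serialVersionUID = 1L;

            float x;
            float y;
            int drawableResId;   // e.g. R.drawable.ball (placeholder)

            BallState(float x, float y, int drawableResId) {
                this.x = x;
                this.y = y;
                this.drawableResId = drawableResId;
            }

            // Rebuild a fresh ImageView from the saved state after reading it back.
            ImageView toImageView(Context context) {
                ImageView view = new ImageView(context);
                view.setImageResource(drawableResId);
                view.setX(x);
                view.setY(y);
                return view;
            }
        }

    With this approach, saveState() writes a BallState and loadState() reads it back and calls toImageView() to recreate the on-screen view.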

    Read the article

  • What happens to date-times and booleans when using DbLinq with SQLite?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm imagining a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not so bad. If you have any experience with this or a similar scenario, what are your "lessons learned"?

    Read the article

  • Updating a TableView with a WebService and Saving to CoreData

    - by jcady
    I am working on a project where I have a table view that is currently updated via a web request that returns XML. I implemented -(int)numberOfRowsInTableView:(NSTableView*)tv and -(id)tableView:(NSTableView *)tv objectValueForTableColumn:(NSTableColumn*)tableColumn row:(int)row in my XML parsing class, and have the table updated with the data that is pulled down from the server. I want to save the data that is pulled down using Core Data, so that the table can be saved/loaded. Then later on application start when the web request is made, it will only add data that is not already present. (The XML is sorted by release date, so later I will check to see which release dates are not loaded up from the Core Data store, and only load newer entries.) How would I go about implementing this? I am a very new Cocoa developer, but have gone through the entire Hillegass book. Thanks so much.

    Read the article

  • Connecting to multiple firebird Databases via Delphi

    - by Branden
    I am integrating a system with two other applications, one using a Firebird database and the other, BIS, using ADO. My Delphi application uses Firebird. I need to read data from my database and insert it into both the BIS database and the other application's Firebird database. I have created separate data modules for each. Sending data to the ADO database works fine, but when writing to the other Firebird DB (with my DB still open) I get strange errors. I have managed to isolate the problem to the second Firebird DB. Small data writes seem fine. The data structures are completely different, so I am unable to use a sync tool. Is there a way to overcome this using multithreading, or by giving each Firebird instance its own memory space?

    Read the article

  • How to retain headers for all the pages of an exported pdf in php?

    - by udaya
    Hi, I am exporting data from a PHP page to PDF. When the data exceeds the page limit, the header is not available on the subsequent pages. The function where I call the export to PDF is: function changeDetails() { $bType = $this->input->post('textvalue'); if($bType == "pdf") { $this->load->library('table'); $this->load->plugin('to_pdf'); $data['countrytoword'] = $this->AddEditmodel1->export(); $this->table->set_heading('Country','State','Town','Name'); $out = $this->table->generate($data['countrytoword']); $html = $this->load->view( 'newpdf',$data, true); pdf_create($html, $cur_date); } } My view page, from which I export the data to PDF, has the headers Name, Country, State, Town. The result I get is: page 1 shows the headers (Name, Country, State, Town) followed by rows such as "udaya india Tamilnadu kovai" and "chandru srilanka columbo aaaaa"; page 2 shows only rows such as "vivek england gggkj gjgjkj". On page 2 I don't get the headers Name, Country, State and Town.

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disc? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking. Several seconds, up to about 10 seconds typically. I find that the app is hanging in the close() method, I assume waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.

    Read the article

  • Crystal report - Conversion from text to decimal ?

    - by 123rmn
    I have a Crystal Report which takes data from an XML template. For a particular field of the report, say 'Cost', the database stored procedure sends data to the XSD file in decimal format, but when the Crystal Report displays the data picked up from the XSD, it is rounded off. When I right-click on other data fields of the report, I can see 'Field: table1.columnname', but when I click on the 'Cost' field, it shows 'Text:'. To my understanding, this is a text field which is mapped to pick data from the XSD, and since the type is text, it gives the result as text, hence truncating the decimals. Please suggest how I can get the decimals here. P.S.: This code was created by someone else, so I have no idea what they set up at the time. I have to fix it and I have no clue about it.

    Read the article

  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?

    Read the article

  • A controller problem using a base CRUD model

    - by rkj
    In CodeIgniter I'm using a base CRUD My_model, but I have a small problem in my browse controller. My $data['posts'] gets all posts from the table called "posts". However, the author in that table is just a user_id, which is why I need to use my "getusername" function (it gets the username for a given ID) to grab the username from the users table. I don't know how to proceed from here, since there is not just one post. I therefore need the username to either be part of the $data['posts'] array, or some other smart solution. Can anyone help me out? function index() { $this->load->model('browse_model'); $data['posts'] = $this->browse_model->get_all(); $data['user'] = $this->browse_model->getusername(XX); $this->load->view('header'); $this->load->view('browse/index', $data); $this->load->view('footer'); }

    Read the article

  • Zend_Form validation problem

    - by GrumpyCanuck
    I am having problems getting validation to work for a form built using Zend_Form. The idea is this: I have two dropdown. One is a list of players. The other is a list of free agents who play the same position as the player. I am using an onChange javascript callback to run some Ajax code that replaces the free agent list dropdown with a new one at the position of the player they've selected from the player dropdown. Now, perhaps this is the wrong way, but I built the form by creating an instance of Zend_Form and then creating all these setX methods that add elements to the form. My reasoning was that I wanted to display certain elements in specific places on the page, not just output $this-form on my template. The problem appears to be when I get the form post back, the validator seems to not know about the validation rule I set up for the free agent drop down. Here's some relevant code to look at. I'm a relative ZF n00b so feel free to tell me I am not doing things the ZF way if it leaps out at you. The action in the controller: public function indexAction() { if ($this->getRequest()->isPost()) { $form = new Baseball_Form_Transactions(); if ($form->isValid($this->_request->getPost())) { $data = $this->_request->getPost(); $leagueInfo = Doctrine::getTable('League')->findOneByShortName($data['shortLeagueName'])->toArray(); // Create the request top drop an existing player $transactionInfo = array( 'league_id' => $leagueInfo['id'], 'team_id' => $data['teamId'], 'player_id' => $data['players'], 'type' => 'drop', 'target_team_id' => 0, 'transaction_date' => date('Y-m-d H:m:s') ); $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); // Now we do the request to add a player $transactionInfo['team_id'] = 0; $transactionInfo['player_id'] = $data['freeAgents']; $transactionInfo['target_team_id'] = $data['teamId']; $transactionInfo['type'] = 'add'; $transaction = new Transaction(); $transaction->fromArray($transactionInfo); $transaction->save(); $this->_flashMessenger->addMessage('Added transaction'); } } $options = array( 'teamId' => $this->teamId, 'position' => 'C', 'leagueShortName' => $this->league ); $this->transactionForm->setMyPlayers($options); $this->transactionForm->setFreeAgents($options); $this->transactionForm->setTeamId($options); $this->transactionForm->setShortLeagueName($options); $this->view->transactionForm = $this->transactionForm; $this->view->messages = $this->_flashMessenger->getMessages(); $transaction = new Transaction(); $this->view->transactions = $transaction->byTeam($options); } Next we have the form itself public function setMyPlayers($options) { $data = Doctrine::getTable('Team')->find($options['teamId']); $players = array(); foreach ($data->Players->toArray() as $player) { $players[$player['id']] = "{$player['position']} - {$player['first_name']} {$player['last_name']}"; } $playersSelect = new Zend_Form_Element_Select( 'players', array( 'required' => true, 'label' => 'Players', 'multiOptions' => $players, ) ); $this->addElement($playersSelect); } public function setFreeAgents($options) { $q = Doctrine_Query::create() ->select('CONCAT(p.first_name, " ", p.last_name) as full_name, p.id, p.position') ->from('Player p') ->leftJoin('p.Teams t') ->leftJoin('t.League l ON l.short_name = ?', $options['leagueShortName']) ->where('t.id IS NULL') ->andWhere('p.position = ?', $options['position']) ->orderBy('p.last_name'); $q->setHydrationMode(Doctrine_Core::HYDRATE_ARRAY); $data = $q->execute(); $freeAgents = array(); foreach 
($data as $player) { $freeAgents[$player['id']] = $player['full_name']; } $freeAgentsSelect = new Zend_Form_Element_Select( 'freeAgents', array( 'label' => 'Free Agents', 'multiOptions' => $freeAgents, 'size' => 15 ) ); $freeAgentsSelect->setRequired(true); $this->addElement($freeAgentsSelect); } public function setShortLeagueName($options) { $shortLeagueNameHidden = new Zend_Form_Element_Hidden( 'shortLeagueName', array('value' => $options['leagueShortName']) ); $this->addElement($shortLeagueNameHidden); } public function setTeamId($options) { $teamIdHidden = new Zend_Form_Element_Hidden( 'teamId', array('value' => $options['teamId']) ); $this->addElement($teamIdHidden); } There is no init or __construct() method in the form. My problem seems simple enough: reject the form contents as invalid if they have not selected someone from the free agent list. Right now, it sails through as valid. I've spent some considerable time searching online for an answer, and haven't been able to find it. Thanks in advance for any help.

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
      This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject. Morning Coffee When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC in failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the taking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here. "But, we have a cluster...we don't need backups" Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise. Backup, fine. How often do I take a backup? The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. 
Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes. A backup is nothing more than an untested restore Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not backup the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs. Where to back up to? Network share? Locally? SAN volume? This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull out drives out a dead server (been there, done that, it’s also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.

    Read the article
