Search Results

Search found 12511 results on 501 pages for 'itunes store'.

Page 433/501

  • iPhone - trouble loading data from a web service into a table view

    - by medampudi
    I am using a window-based application and then loading my initial navigation-view-based controller. After it loads, if the user is not registered or does not have credentials present, the app takes the user to a login view controller:

        loginViewController *sampleView = [[loginViewController alloc] initWithNibName:@"loginViewController" bundle:nil];
        [self.navigationController presentModalViewController:sampleView animated:YES];
        [sampleView release];

    Right after that I try to load the table with data that I get from a web service using ASIHTTP. For this question, let's say it takes 3 seconds to get the data and then deserialize it. It works out okay on later runs, as I store the username and password in a secure location, but on the first run I am not able to get the data to load into the table view. I have tried a lot of things:
    1. Initially the data-fetch methods were in a different class, so I thought that might be the problem and moved them into the same place as the table view controller (navigation controller).
    2. I even put a reloadData call at the end of the data parsing and deserialization - nothing happens.
    3. I did not understand the concept of @property and all of that.
    4. The screen stays black with nothing displayed on it for a good 5 seconds on consecutive launches of the app, so could something like an MBProgressHUD be shown for the same?
    Could anyone please help with these scenarios and give guidance as to what paths to take from here?

    Read the article

  • Best practices for encrypting continuous/small UDP data

    - by temp
    Hello everyone, I have an application where I have to send several small pieces of data per second through the network using UDP. The application needs to send the data in real time (no waiting). I want to encrypt these data and ensure that what I am doing is as secure as possible.

    Since I am using UDP, there is no way to use SSL/TLS, so I have to encrypt each packet alone, since the protocol is connectionless/unreliable/unregulated. Right now, I am using a 128-bit key derived from a passphrase from the user, and AES in CBC mode (PBE using AES-CBC). I decided to use a random salt with the passphrase to derive the 128-bit key (to prevent dictionary attacks on the passphrase), and of course to use IVs (to prevent statistical analysis of packets). However, I am concerned about a few things:

    Each packet contains a small amount of data (like a couple of integer values per packet), which will make the encrypted packets vulnerable to known-plaintext attacks (which will make it easier to crack the key). Also, since the encryption key is derived from a passphrase, the key space is much smaller (I know the salt will help, but I have to send the salt through the network once and anyone can get it). Given these two things, anyone can sniff and store the sent data and try to crack the key. Although this process might take some time, once the key is cracked all the stored data will be decrypted, which will be a real problem for my application.

    So my question is: what are the best practices for sending/encrypting continuous small data using a connectionless protocol (UDP)? Is my way the best way to do it? ...flawed? ...overkill? Please note that I am not asking for a 100% secure solution, as there is no such thing. Cheers
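    A minimal sketch of the per-packet scheme described above, written in Python with the cryptography package purely for illustration (the poster's platform isn't stated, so the library choice and the iteration count are assumptions). It swaps CBC for AES-GCM so each datagram also carries an integrity tag, and prepends the fresh random nonce to each packet:

        import os
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
        from cryptography.hazmat.primitives.ciphers.aead import AESGCM

        def derive_key(passphrase: bytes, salt: bytes) -> bytes:
            # Stretch the passphrase into a 128-bit key; the salt is exchanged once, as in the question.
            kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=16, salt=salt, iterations=200_000)
            return kdf.derive(passphrase)

        def seal_packet(key: bytes, payload: bytes) -> bytes:
            # Fresh 96-bit nonce per datagram; GCM adds an auth tag so tampered packets are rejected.
            nonce = os.urandom(12)
            return nonce + AESGCM(key).encrypt(nonce, payload, None)

        def open_packet(key: bytes, packet: bytes) -> bytes:
            nonce, ciphertext = packet[:12], packet[12:]
            return AESGCM(key).decrypt(nonce, ciphertext, None)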

    Read the article

  • JavaScript: List global variables in IE

    - by Quandary
    I'm trying to get the instance name of my class. The way I do this is to loop through all global objects and compare each one with the this pointer. It works in Chrome and FF, but in IE it doesn't. The problem seems to be that the global variables aren't in window. How can I loop through the global variables in IE? PS: I know this only works as long as there is only one instance, and I don't want to pass the instance's name as a parameter.

        function myClass() {
            this.myName = function () {
                // search through the global object for a name that resolves to this object
                for (var name in this.global) {
                    if (this.global[name] == this) return name;
                }
            };
        }

        function myClass_chrome() {
            this.myName = function () {
                // search through the global object for a name that resolves to this object
                for (var name in window) {
                    if (window[name] == this) return name;
                }
            };
        }

        // store the global object, which can be referred to as this at the top level, in a
        // property on our prototype, so we can refer to it in our object's methods
        myClass.prototype.global = this;
        //myClass_IE.prototype.global = this;

        // create a global variable referring to an object
        // var myVar = new myClass()
        var myVar = new myClass_chrome();
        //var myVar = new myClass_IE()
        alert(myVar.myName()); // returns "myVar"

    Read the article

  • Strongly typed dynamic Linq sorting

    - by David
    I'm trying to build some code for dynamically sorting a LINQ IQueryable<T>. The obvious approach is the one described here, which sorts a list using a string for the field name: http://dvanderboom.wordpress.com/2008/12/19/dynamically-composing-linq-orderby-clauses/

    However, I want one change: compile-time checking of field names, and the ability to use refactoring / Find All References to support later maintenance. That means I want to define the fields as f => f.Name instead of as strings. For my specific use I want to encapsulate some code that decides which of a list of named "OrderBy" expressions should be used based on user input, without writing different code every time. Here is the gist of what I've written:

        var list = from m in Movies select m;   // Get our list
        var sorter = list.GetSorter(...);       // Pass in some global user settings object
        sorter.AddSort("NAME", m => m.Name);
        sorter.AddSort("YEAR", m => m.Year).ThenBy(m => m.Year);
        list = sorter.GetSortedList();
        ...
        public class Sorter ...
        public static Sorter GetSorter(this IQueryable source, ...)

    The GetSortedList function determines which of the named sorts to use, which results in a List object where each FieldData item contains the MethodInfo and Type values of the fields passed to AddSort:

        public SorterItem AddSort(Func field)
        {
            MethodInfo ... = field.Method;
            Type ... = typeof(TKey);
            // Create item, add item to dictionary, add fields to item's List
            // The item has the ThenBy method, which just adds another field to the List
        }

    I'm not sure if there is a way to store the entire field object in a way that would allow it to be returned later (it would be impossible to cast, since it is a generic type). Is there a way I could adapt the sample code, or come up with entirely new code, in order to sort using strongly typed field names after they have been stored in some container and retrieved (losing any generic type casting)?

    Read the article

  • Initializing AngularJS service factory style

    - by wisemanIV
    I have a service that retrieves data via REST. I want to store the resulting data in a service-level variable for use in multiple controllers. When I put all the REST logic directly into controllers everything works fine, but when I attempt to move the retrieval/storing of data into a service, the controller is not updated when the data comes back. I've tried lots of different ways of maintaining the binding between service and controller.

    Controller:

        myApp.controller('SiteConfigCtrl', ['$scope', '$rootScope', '$route', 'SiteConfigService',
            function ($scope, $rootScope, $route, SiteConfigService) {
                $scope.init = function() {
                    console.log("SiteConfigCtrl init");
                    $scope.site = SiteConfigService.getConfig();
                }
            }
        ]);

    Service:

        myApp.factory('SiteConfigService', ['$http', '$rootScope', '$timeout', 'RESTService',
            function ($http, $rootScope, $timeout, RESTService) {
                var siteConfig = {};
                RESTService.get("https://domain/incentiveconfig", function(data) {
                    siteConfig = data;
                });
                return {
                    getConfig: function () {
                        console.debug("SiteConfigService getConfig:");
                        console.debug(siteConfig);
                        return siteConfig;
                    }
                };
            }
        ]);

    Read the article

  • How can I prevent double file uploading with Amazon S3?

    - by Tony
    I decided to use Amazon S3 for document storage for an app I am creating. One issue I run into is that while I need to upload the files to S3, I also need to create a document object in my app so my users can perform CRUD actions.

    One solution is to allow a double upload. A user uploads a document to the server my Rails app lives on. I validate it and create the object, then pass it on to S3. One issue with this is that progress indicators become more complicated: most out-of-the-box plugins would show the client that the file has finished uploading because it is on my server, but then there would be a decent delay while the file goes from my server to S3. This also introduces unnecessary bandwidth (at least it does not seem necessary).

    The other solution I am thinking about is to upload the file directly to S3 with one AJAX request and, when that is successful, make a second AJAX request to store the object in my database. One issue here is that I would have to validate the file after it is uploaded, which means I have to run some clean-up code in S3 if the validation fails.

    Both seem equally messy. Does anyone have something more elegant working that they would not mind sharing? I would imagine this is a common situation with "cloud storage" being quite popular today. Maybe I am looking at this wrong.
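    For the second option (the client uploads straight to S3, then a follow-up request records the document), one common pattern is to have the application server hand out a short-lived presigned URL. A rough sketch in Python with boto3, purely for illustration since the app itself is Rails; the bucket name and expiry are made up:

        import boto3

        s3 = boto3.client("s3")

        def presigned_upload_url(key: str) -> str:
            # The client PUTs the file to this URL directly; the app only stores metadata afterwards.
            return s3.generate_presigned_url(
                ClientMethod="put_object",
                Params={"Bucket": "example-documents", "Key": key},  # placeholder bucket
                ExpiresIn=600,  # seconds
            )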

    Read the article

  • If attacker has original data, and encrypted data, can they determine the passphrase?

    - by Brad Cupit
    If an attacker has several distinct items (for example: e-mail addresses) and knows the encrypted value of each item, can the attacker more easily determine the secret passphrase used to encrypt those items? Meaning, can they determine the passphrase without resorting to brute force?

    This question may sound strange, so let me provide a use-case:
    1. A user signs up to a site with their e-mail address.
    2. The server sends that e-mail address a confirmation URL (for example: https://my.app.com/confirmEmailAddress/bill%40yahoo.com).
    3. An attacker can guess the confirmation URL and therefore can sign up with someone else's e-mail address, and 'confirm' it without ever having to sign in to that person's e-mail account and see the confirmation URL. This is a problem.
    Instead of sending the e-mail address as plain text in the URL, we'll send it encrypted with a secret passphrase. (I know the attacker could still intercept the e-mail sent by the server, since e-mails are plain text, but bear with me here.) If an attacker then signs up with multiple free e-mail accounts and sees multiple URLs, each with the corresponding encrypted e-mail address, could the attacker more easily determine the passphrase used for encryption?

    Alternative solution: I could instead send a random number or a one-way hash of their e-mail address (plus random salt). This eliminates storing the secret passphrase, but it means I need to store that random number/hash in the database. The original approach above does not require this extra table. I'm leaning towards the one-way hash + extra table solution, but I still would like to know the answer: does having multiple unencrypted e-mail addresses and their encrypted counterparts make it easier to determine the passphrase used?
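    For the alternative being weighed, a keyed one-way hash (HMAC) is a common middle ground: the server keeps a single secret, the token cannot be reversed to recover the address, and no per-address row is needed. A sketch in Python for illustration only; the secret shown is obviously a placeholder:

        import hmac
        import hashlib

        SECRET = b"replace-with-a-long-random-server-side-secret"  # placeholder

        def confirmation_token(email: str) -> str:
            # The same address always yields the same token, but recovering the secret from
            # (address, token) pairs is as hard as breaking HMAC-SHA256.
            return hmac.new(SECRET, email.lower().encode("utf-8"), hashlib.sha256).hexdigest()

        # The confirmation URL then carries the address plus its token, and the server
        # recomputes the token and compares it with hmac.compare_digest().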

    Read the article

  • persist values/variables from page to page

    - by Todd
    Hello, I'm wondering if there's another solution to my problem that's considered more the SharePoint way. Firstly, my site is an Internet site, not an intranet. The problem is, all I'm trying to do is save values/variables from page to page in SharePoint. I know the issues with session variables, but this seems to be the only way I can see to accomplish this. I know there are web parts that can store this value, but am I wrong in thinking this won't be persisted from page to page?

    Basically, I'll be extending the Content Query Web Part to dynamically filter its results based on a variable/value. The user chooses their 'area' from a dropdown, and the CQWPs in the site will change and query results based on this value. (It will be a provincial structure, as it is a Canadian site; so if someone chooses the province 'Ontario', this value is saved in a global variable, and the extended CQWPs throughout the site will get this value and query lists flagged as Ontario.) Are session variables the only solution? Thanks everyone!

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. I have the following: in a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a dropdown list), FinancialPeriod (a cascading dropdown list populated depending on the first selection), and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query that populates the second's available values. This works fine.

    What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error: "An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type." I've tried setting the default of the second parameter to a meaningful value (something like 200901) as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler, I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain the available values for the second parameter.

    Read the article

  • Re-usable Obj-C classes with custom values: The right way

    - by Prairiedogg
    I'm trying to reuse a group of Obj-C classes between iPhone applications. The values that differ from app to app have been isolated, and I'm trying to figure out the best way to apply these custom values to the classes on an app-to-app basis.

    Should I hold them in code?

        // I might have 10 customizable values for each class, that's a long signature!
        CarController *controller = [[CarController alloc] initWithFontName:@"Vroom" engine:@"Diesel" color:@"Red" number:11];

    Should I store them in a big settings.plist?

        // Wasteful! I sometimes only use 2-3 of 50 settings!
        AllMyAppSettings *settings = [[AllMyAppSettings alloc] initFromDisk:@"settings.plist"];
        MyCustomController *controller = [[MyCustomController alloc] initWithSettings:settings];
        [settings release];

    Should I have little, optional n_settings.plists for each class?

        // Sometimes I customize
        CarControllerSettings *carSettings = [[CarControllerSettings alloc] initFromDisk:@"car_settings.plist"];
        CarController *controller = [[CarController alloc] initWithSettings:carSettings];
        [carSettings release];
        // Sometimes I don't, and CarController falls back to internally stored, reasonable defaults.
        CarController *controller = [[CarController alloc] initWithSettings:nil];

    Or is there an OO solution that I'm not thinking of at all that would be better?
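    One pattern that keeps call sites short without a monolithic settings file is the third option's shape: a per-class settings object with built-in defaults that an optional dictionary (or plist) can override. A language-neutral sketch in Python, only to show the idea; the class and key names are invented:

        class CarControllerSettings:
            DEFAULTS = {"font_name": "Vroom", "engine": "Diesel", "color": "Red", "number": 11}

            def __init__(self, overrides=None):
                # Start from reasonable built-in defaults, then apply only the keys this app cares about.
                self.values = {**self.DEFAULTS, **(overrides or {})}

        class CarController:
            def __init__(self, settings=None):
                self.settings = settings or CarControllerSettings()

        default_app = CarController()                                              # defaults are fine
        custom_app = CarController(CarControllerSettings({"color": "Blue"}))       # override one value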

    Read the article

  • can I add properties to a typo3 extbase domain model object?

    - by The Newbie Qs
    I want to store a username in a coupon object; each coupon object already has the uid of the user who created it. I can loop over the coupon objects and read the associated usernames from fe_users, but how then will I save those usernames into the coupons so that, when they are passed to the template, the usernames can be read like coupon.username (or in some other easy way), so each username will appear on the page with the right coupon as they are all printed out in a table? If I were doing basic PHP instead of TYPO3 I would just define a query, but what is the TYPO3 v4.5 way?

    My code so far - which dies on the line where I try to assign the new property (creatorname) to the $coup object:

        public function listAction() {
            $coupons = $this->couponRepository->findAll();
            // @var Tx_Extbase_Domain_Repository_FrontendUserRepository $userRepository */
            $userRepository = $this->objectManager->get("Tx_Extbase_Domain_Repository_FrontendUserRepository");
            foreach ($coupons as $coup) {
                echo '<br />test '.$coup->getCreator();
                echo '<br />count = '.$userRepository->countAll().'<br />';
                $newObject = $userRepository->findByUid( intval($coup->getCreator()));
                //var_dump($newObject);
                var_dump($coup);
                echo '<br />getUsername '.$newObject->getUsername() ;
                $coup['creatorname'] = $newObject->getUsername();
                echo '<br />creatorname '.$coup['creatorname'] ;
            }
            $this->view->assign('coupons', $coupons);
        }

    Read the article

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in - they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file is at 120-150 MB!), the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)
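    As a point of comparison for the Access experiment above, pulling a 100-150 MB CSV into a single-file SQLite database keeps the file size close to the raw data and makes ad-hoc filtering quick. A rough sketch in Python (standard library only); the file names and column list are placeholders, not the actual report schema:

        import csv
        import sqlite3

        conn = sqlite3.connect("report_cache.db")
        conn.execute("CREATE TABLE IF NOT EXISTS report (region TEXT, item TEXT, qty REAL)")

        with open("report.csv", newline="") as f:
            rows = ((r["region"], r["item"], r["qty"]) for r in csv.DictReader(f))
            conn.executemany("INSERT INTO report VALUES (?, ?, ?)", rows)
        conn.commit()

        # A thin front end (or the users themselves) can then slice the data without reloading the CSV:
        for row in conn.execute("SELECT region, SUM(qty) FROM report GROUP BY region"):
            print(row)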

    Read the article

  • Void* array casting to float, int32, int16, etc.

    - by Griffin
    Hey guys, I've got an array of PCM data; it could be 16-bit, 24-bit packed, 32-bit, etc. It could be signed or unsigned, and it could be 32- or 64-bit floating point. It is currently stored as a "void**" matrix, indexed by channel, then by frame. The goal is to allow my library to take in any PCM format and buffer it, without requiring manipulation of the data to fit a designated structure. If the A/D converter spits out 24-bit packed arrays of interleaved PCM, I need to accept it gracefully. I also need to support 16-bit non-interleaved, as well as any permutation of the above formats. I know the bit depth and other information at runtime, and I'm trying to code efficiently while not duplicating code.

    What I need is an effective way to cast the matrix, put PCM data into the matrix, and then pull it out later. I can cast the matrix to int32_t or int16_t for the 32- and 16-bit signed PCM respectively; I'll probably have to store the 24-bit PCM in an int32_t for 32-bit, 8-bit-byte systems as well. Can anyone recommend a good way to put data into this array and pull it out later? I'd like to avoid large sections of code which look like:

        switch( mFormat )
        {
        case 1: // unsigned 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (uint8_t*)pcm[i];
            break;
        case 2: // signed 8 bit
            for( int i = 0; i < mChannels; i++ )
                framesArray = (int8_t*)pcm[i];
            break;
        case 3: // unsigned 16 bit
        ...

    Limitations: I'm working in C/C++; no templates, no RTTI, no STL. Think embedded. Things get trickier when I have to port this to a DSP with 16-bit bytes. Does anybody have any useful macros they might be willing to share? Thanks, -Griff
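    The switch above can often be collapsed into a lookup table keyed by the format code, so there is one code path regardless of sample format. The idea is sketched here in Python with the standard array module purely to show the shape of a table-driven dispatch (a C version would map format codes to element sizes and casted pointers instead); format codes beyond the poster's 1/2/3 are invented for illustration:

        import array

        # Map each PCM format code to the typecode used to reinterpret the raw bytes.
        TYPECODES = {
            1: "B",  # unsigned 8-bit (poster's code 1)
            2: "b",  # signed 8-bit   (poster's code 2)
            3: "H",  # unsigned 16-bit (poster's code 3)
            4: "h",  # signed 16-bit   (hypothetical code)
            5: "i",  # signed 32-bit   (hypothetical; 24-bit packed would need manual unpacking)
            6: "f",  # 32-bit float    (hypothetical code)
        }

        def frames_from_bytes(fmt: int, raw: bytes) -> array.array:
            frames = array.array(TYPECODES[fmt])
            frames.frombytes(raw)  # one code path regardless of format
            return frames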

    Read the article

  • Should I base my Embedded Linux product on Qt?

    - by Udi
    My company is developing a medical product. One of the components is a PDA-like platform that will run embedded Linux. We were considering Qt as the UI framework but found out that Qt is a lot more than that (we are not familiar with Qt). In general, the device needs to do the following:
    1. Receive measurements over USB HID from another device (USB HID is used for convenience).
    2. Process the measurements.
    3. Store them in a database.
    4. Interact with the user using the device's touch-screen LCD.
    5. Communicate (wi-fi, TCP/IP) with a central management station that collects the data and configures the device.
    6. Include a web server to allow accessing the device via a browser.
    We intend to program in C++. My questions are:
    1. Is that a good choice for such a device?
    2. Assuming we choose Qt, how do we build our product?
       - Do we use Qt just as a GUI framework and write the application code in a separate process (passing messages between Qt and the application process)?
       - Do we write the entire application inside Qt, using all of the services the tool has to offer?
       - Another approach?

    Read the article

  • Have something loaded only when JList item is visible

    - by elvencode
    Hello, I'm implementing a JList populated with a lot of elements. Each element corresponds to an image, so I'd like to show a resized preview of them inside each row of the list. I've implemented a custom ImageCellRenderer extending JLabel, and in getListCellRendererComponent I create the thumbnail if there isn't one for that element. Each row corresponds to a Page class where I store the path of the image and the icon applied to the JLabel. Each Page object is put inside a DefaultListModel to populate the JList. The render code is something like this:

        public Component getListCellRendererComponent(JList list, Object value, int index,
                boolean isSelected, boolean cellHasFocus) {
            Page page = (Page) value;
            if (page.getImgIcon() == null) {
                System.out.println(String.format("Creating thumbnail of %s", page.getImgFilename()));
                ImageIcon icon = new ImageIcon(page.getImgFilename());
                int thumb_width = icon.getIconWidth() > icon.getIconHeight() ? 128
                        : ((icon.getIconWidth() * 128) / icon.getIconHeight());
                int thumb_height = icon.getIconHeight() > icon.getIconWidth() ? 128
                        : ((icon.getIconHeight() * 128) / icon.getIconWidth());
                icon.setImage(getScaledImage(icon.getImage(), thumb_width, thumb_height));
                page.setImgIcon(icon);
            }
            setIcon(page.getImgIcon());
        }

    I was thinking that the cell renderer is called only when a certain item is visible in the list, but I'm seeing that all the thumbnails are created when I add the Page objects to the list model. I've tried loading the items and setting the model on the JList afterwards, and setting the model first and then appending the items, but the results are the same. Is there any way to load the data only when necessary, or do I need to create a custom control, like a JScrollPane with stacked items inside, where I check the visibility of each element myself? Thanks

    Read the article

  • Querying the size of a drive on a remote server is giving me an error

    - by Testifier
    Here's what I have:

        public static bool DriveHasLessThanTenPercentFreeSpace(string server)
        {
            long driveSize = 0;
            long freeSpace = 0;
            var oConn = new ConnectionOptions {Username = "username", Password = Settings.Default.SQLServerAdminPassword};
            var scope = new ManagementScope("\\\\" + server + "\\root\\CIMV2", oConn);
            //connect to the scope
            scope.Connect();
            //construct the query
            var query = new ObjectQuery("SELECT FreeSpace FROM Win32_LogicalDisk where DeviceID = 'D:'");
            //this class retrieves the collection of objects returned by the query
            var searcher = new ManagementObjectSearcher(scope, query);
            //this is similar to using a data adapter to fill a data set - use an instance of the
            //ManagementObjectSearcher to get the collection from the query and store it
            ManagementObjectCollection queryCollection = searcher.Get();
            //iterate through the object collection and get what you're looking for - in my case
            //I want the FreeSpace of the D drive
            foreach (ManagementObject m in queryCollection)
            {
                //the FreeSpace value is in bytes
                freeSpace = Convert.ToInt64(m["FreeSpace"]);
                //error happens here!
                driveSize = Convert.ToInt64(m["Size"]);
            }
            long percentFree = ((freeSpace / driveSize) * 100);
            if (percentFree < 10)
            {
                return true;
            }
            return false;
        }

    This line of code is giving me an error:

        driveSize = Convert.ToInt64(m["Size"]);

    The error says: "ManagementException was unhandled by user code: Not found". I'm assuming the query to get the drive size is wrong. Please note that I AM getting the freeSpace value at the line:

        freeSpace = Convert.ToInt64(m["FreeSpace"]);

    So I know the query IS working for freeSpace. Can anyone give me a hand?

    Read the article

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute:

        [WebMethod(EnableSession = true)]

    I'm using the session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue which looks like this:

        [Serializable]
        public class QueueManager
        {
            ...
            public Queue<QBChange> ChangeQueue { get; set; }
            ...

    where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue:

        QueueManager qm = (QueueManager)Session[ticket];

    Then I remove an object from the queue using qm.dequeue(), and save the modified QueueManager object (modified because it contains one less object in the queue) back to the session variable, like so:

        Session[ticket] = qm;

    ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line (//Session[ticket] = qm;), the web service behaves exactly the same way, reducing the size of the queue between method calls. Now why is that? The web service seems to be updating a class contained in serialized form in a session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely.

    Read the article

  • Memory Leak in returning NSMutableArray from class

    - by Structurer
    Hi, I am quite new to Objective-C for the iPhone, so I hope you won't kill me for asking a simple question. I have made an app that works fine, except that Instruments reports memory leaks from the class below. I use it to store settings from one class and then retrieve them from another class. These settings are stored in a file so they can be retrieved every time the app is run. What can I do to release the "settings", and is there anything that can be done to call (use) the class in a smarter way? Thanks

    ----- Below is Settings.m -----

        #import "Settings.h"

        @implementation Settings
        @synthesize settings;

        -(NSString *)dataFilePath // Return path for setting file, including filename
        {
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            return [documentsDirectory stringByAppendingPathComponent:kUserSettingsFileName];
        }

        -(NSMutableArray *)getParameters // Return settings from disk after checking if file exists (if not, create with default values)
        {
            NSString *filePath = [self dataFilePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:filePath]) // Getting data from file
            {
                settings = [[NSMutableArray alloc] initWithContentsOfFile:filePath];
            }
            else // Creating default settings
            {
                settings = [[NSMutableArray alloc] initWithObjects:
                            [NSNumber numberWithInteger:50],
                            [NSNumber numberWithInteger:50], nil];
                [settings writeToFile:[self dataFilePath] atomically:YES];
            }
            return settings;
        }

    ----- Below is my other class, from where I call my Settings class -----

        // Get settings from file
        Settings *aSetting = [[Settings alloc] init];
        mySettings = [aSetting getParameters];
        [aSetting release];

    Read the article

  • Android - Take a photo, save it in app drawables and display it in an ImageButton

    - by Andres7X
    I have an Android app with an ImageButton. When the user clicks on it, an intent launches to show the camera activity. When the user captures the image, I'd like to save it in the drawable folder of the app and display it in the same ImageButton clicked by the user, replacing the previous drawable image. I used the activity posted here: Capture Image from Camera and Display in Activity ...but when I capture an image, the activity doesn't return to the activity which contains the ImageButton. The edited code is:

        public void manage_shop() {
            static final int CAMERA_REQUEST = 1888;
            [...]
            ImageView photo = (ImageView)findViewById(R.id.getimg);
            photo.setOnClickListener(new View.OnClickListener() {
                @Override
                public void onClick(View v) {
                    Intent camera = new Intent(android.provider.MediaStore.ACTION_IMAGE_CAPTURE);
                    startActivityForResult(camera, CAMERA_REQUEST);
                }
            });
            [...]
        }

    And onActivityResult():

        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            ImageButton getimage = (ImageButton)findViewById(R.id.getimg);
            if (requestCode == CAMERA_REQUEST && resultCode == RESULT_OK) {
                Bitmap getphoto = (Bitmap) data.getExtras().get("data");
                getimage.setImageBitmap(getphoto);
            }
        }

    How can I also store the captured image in the drawable folder?

    Read the article

  • Excel VBA creating a new column with formula

    - by Amatya
    I have an Excel file with a column which has date data. I want the user to input a date of their choosing, and then I want to create a new column that lists the difference in days between the two dates. The macro that I have is working, but I have a few questions and I would like to make it better. Link to MWE small data file is here. The user input date was 9/30/2013, which I stored in H20.

    Macro:

        Sub Date_play()
            Dim x As Date
            Dim x2 As Date
            Dim y As Variant
            x = InputBox(Prompt:="Please enter the Folder Report Date. The following formats are acceptable: 4 1 2013 or April 1 2013 or 4/1/2013")
            x2 = Range("E2")
            y = DateDiff("D", x2, x)
            MsgBox y
            'Used DateDiff above and it works but I don't know how to use it to fill a column or indeed a cell.
            Range("H20").FormulaR1C1 = x
            Range("H1").FormulaR1C1 = "Diff"
            Range("H2").Formula = "=DATEDIF(E2,$H$20,""D"")"
            Range("H2").AutoFill Destination:=Range("H2:H17")
            Range("H2:H17").Select
        End Sub

    Now, could I have done this without storing the user input date in a particular cell? I would have preferred to use the variable "x" in the formula, but it wasn't working for me; I had to store the user input in H20 and then use $H$20. What's the difference between the function DATEDIF and the procedure DateDiff? I am able to use the procedure DateDiff in my macro, but I don't know how to use it to fill out my column. Is one method better than the other? Is there a better way to add columns to the existing sheet, where the columns include some calculations involving existing data on the sheet and some user inputs? There are tons of more complicated calculations I want to do next. Thanks

    Read the article

  • architecture - centralized location for different modules (cms, web applications, ...) - best practice

    - by NicoJuicy
    Let's just say that I want to create a CMS plus other online applications. I want them all to integrate into a central location, but they also have to be available separately (not everyone wants more than the CMS solution).

    Would I create a huge central application that contains the whole database, which communicates through a web service with the "standalone - integrated" modules? Or would I create them separately, so that the only thing the "central" application does is sync the information (e.g. the CMS and another solution can have the same tables, such as clients or employees)? Or do you have another idea? (I know I'm a little vague, but I can't give a lot of details because of work - contract.)

    If someone has all the "packages", it should be possible for the central application to integrate all the modules in one place. Or if someone has more than one module, it should combine them on the website. What I thought is best is that the central location contains only the users and their rights (e.g. CMS - all rights, ...), and the information gets synced with every change (module CMS, adding a new client - store locally and send the data to the central location; central location - send to modules = clients table updated everywhere). This way it is easy if someone only "bought" a module; it can be synced easily through the complete architecture. I hope I made myself clear!

    Read the article

  • How to marshal an object and its content (also objects)

    - by Waldo Spek
    I have a question for which I suspect the answer is a bit complex. At this moment I am programming a DLL (class library) in C#. This DLL uses a 3rd-party library and therefore deals with 3rd-party objects of which I do not have the source code. Now I am planning to create another DLL, which is going to be used at a later stage in my application. This second DLL should use the 3rd-party objects (with corresponding object states) created by the first DLL.

    Luckily the 3rd-party objects extend the MarshalByRefObject class. I can marshal the objects using System.Runtime.Remoting.Marshal(...). I then serialize the objects using a BinaryFormatter and store the objects as a byte[] array. All goes well. I can deserialize and unmarshal in the opposite way and end up with my original 3rd-party objects... so it appears...

    Nevertheless, when calling methods on my deserialized 3rd-party objects I get object-internal exceptions. Normally these methods return other 3rd-party objects, but (obviously - I guess) now these objects are missing because they weren't serialized. Now my global question: how would I go about marshalling/serializing all the objects which my 3rd-party objects reference, and cascade down the "reference tree" to obtain a full and complete serialized object? Right now my guess is to preprocess: obtain all the objects, build my own custom object and serialize it. But I'm hoping there is some other way...

    Read the article

  • Greasemonkey failing to GM_setValue()

    - by HonoredMule
    I have a Greasemonkey script that uses a JavaScript object to maintain some stored objects. It covers quite a large volume of information, but substantially less than it successfully stored and retrieved prior to encountering my problem. One value refuses to save, and I can not for the life of me determine why. The following problem code:
    - Works for other larger objects being maintained.
    - Is presently handling a smaller total amount of data than previously worked.
    - Is not colliding with any function or other object definitions.
    - Can (optionally) successfully save the problem storage key as "{}" during code startup.

        this.save = function(table) {
            var tables = this.tables;
            if(table) tables = [table];
            for(i in tables) {
                logger.log(this[tables[i]]);
                logger.log(JSON.stringify(this[tables[i]]));
                GM_setValue(tables[i] + "_" + this.user, JSON.stringify(this[tables[i]]));
                logger.log(tables[i] + "_" + this.user + " updated");
                logger.log(GM_getValue(tables[i] + "_" + this.user));
            }
        }

    The problem is consistently reproducible and the logging statements produce the following output in Firebug:

        Object { 54,10 = Object } // Expansion shows complete contents as expected, but there is one oddity - Firebug highlights the array keys in purple instead of the usual black for anonymous objects.
        {"54,10":{"x":54,"y":10,"name":"Lucky Pheasant"}} // The correctly parsed string.
        bookmarks_HonoredMule saved
        undefined

    I have tried altering the format of the object keys, to no effect. Further narrowing down the issue: this particular value is successfully saved as an empty object ("{}") during code initialization, but skipping that also does not help. Reloading the page confirms that saving of the non-empty object truly failed. Any idea what could cause this behavior? I've thoroughly explored the possibility of hitting size constraints, but it doesn't appear that can be the problem - as previously mentioned, I've already reduced storage usage. Other larger objects still save, and the total number of objects, which was not high already, has been further reduced by an amount greater than the quantity of data I'm attempting to store here.

    Read the article

  • Display and Use Advanced Find for Product Subscriptions by Account in Microsoft CRM 4.0

    - by Chris
    By default, when viewing an account in edit mode you have access to Opportunities, Invoices, and Quotes, which contain the products being shopped by the account and/or the sales department. I'm trying to determine where to store, display, and use the products that an account has a subscription to. I may not understand the implementation, but it seems that there should be a "Products" option directly off the root Account management window that shows the user all the products the account has purchased.

    We are trying to integrate this with our production tracking system, where product sales can originate from other channels that will not flow through CRM first. This product subscription does not fit into the Opportunity, Quote, or Invoice model because these are confirmed recurring sales that were automatically purchased via tools like a public website, portal, etc. By enabling this tracking in CRM we can use the Advanced Find feature to facilitate follow-up sales and marketing efforts. Example: find everyone who is subscribed to model A, so we can notify them of a new holiday campaign where they can get 10% off on all add-ons. It's my assumption that this is a common scenario; however, I'd like to better understand how to approach this within the world of Microsoft CRM. Thank you in advance.

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired - data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if it is ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying the device for logs. The cache would be updated with any logs obtained by the request that were not already cached.

    Finally my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
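    A sketch of the cache-first idea, with Python and SQLite standing in for the C#/SqlServerCe stack just to show the flow; the table layout, the 60-second log interval (one log per minute, as described above) and the read_from_device call are assumptions:

        import sqlite3

        db = sqlite3.connect("log_cache.db")
        db.execute("CREATE TABLE IF NOT EXISTS logs (ts INTEGER PRIMARY KEY, payload BLOB)")

        def get_logs(start_ts, end_ts, read_from_device):
            # Serve whatever the cache already has...
            cached = dict(db.execute(
                "SELECT ts, payload FROM logs WHERE ts BETWEEN ? AND ?", (start_ts, end_ts)))
            # ...and fetch only the timestamps that are missing over the slow UART link.
            missing = [ts for ts in range(start_ts, end_ts + 1, 60) if ts not in cached]
            for ts, payload in read_from_device(missing):   # hypothetical device call
                db.execute("INSERT OR REPLACE INTO logs VALUES (?, ?)", (ts, payload))
                cached[ts] = payload
            db.commit()
            return [cached[ts] for ts in sorted(cached)]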

    Read the article
