Search Results

Search found 4247 results on 170 pages for 'mark unwin'.

Page 69/170 | < Previous Page | 65 66 67 68 69 70 71 72 73 74 75 76  | Next Page >

  • SaaS Multi-tenancy Applications: How is data import/export/backup being implemented?

    - by Mark Redman
    How are applications providing import/export (or backup) of data in SaaS-based multi-tenancy applications, particularly single-database designs? Imports: Keeping things simple, I think basic imports are useful, i.e. CSV to a spec (or a way of providing a mapping between CSV columns and fields in the database). Exports: In single-database designs I have seen XML exports and HTML (basic generated site) exports of data. I would assume that XML is the better option? How does one cater for relational data? Would you reference related entities within the XML and provide documentation of the relationships, or let users figure this out themselves? Are vendors providing an export/backup that can be imported back in/restored? Your comments appreciated.
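
    For the CSV import side, one common pattern is to let each tenant supply a mapping from CSV header names to entity field names and apply it row by row. A minimal C# sketch of that idea (CsvImporter, the column names, and the naive comma split are all illustrative assumptions, not any particular vendor's API):

      using System;
      using System.Collections.Generic;
      using System.IO;
      using System.Linq;

      class CsvImporter
      {
          // Tenant-defined mapping from CSV header names to entity field names,
          // e.g. "Customer Name" -> "Name".
          private readonly Dictionary<string, string> columnMap;

          public CsvImporter(Dictionary<string, string> columnMap)
          {
              this.columnMap = columnMap;
          }

          // Yields one field-name -> raw-value dictionary per CSV data row.
          public IEnumerable<Dictionary<string, string>> Read(TextReader reader)
          {
              var headers = reader.ReadLine().Split(',');
              string line;
              while ((line = reader.ReadLine()) != null)
              {
                  var values = line.Split(','); // naive split: ignores quoting/escaping
                  yield return headers
                      .Zip(values, (header, value) => new { header, value })
                      .Where(pair => columnMap.ContainsKey(pair.header))
                      .ToDictionary(pair => columnMap[pair.header], pair => pair.value);
              }
          }
      }

    An export can do the reverse: walk the tenant's rows and emit CSV per entity, or one XML document whose elements reference related records by ID so the relationships survive a re-import.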

    Read the article

  • Is it possible to create a service like Feed My Inbox on my own server?

    - by Mark Bowen
    I was just wondering if it's at all possible to create a service like Feed My Inbox on my own server using PHP? Basically I have a site which has RSS feeds which are dynamic in nature and can search across thousands of posts based on many different criteria. I have the RSS feed working fine and bringing back data dynamically for whatever criteria I want, so that bit is fine. I am using the ExpressionEngine CMS to handle the site and there will be thousands of users on the site (currently there are around 2,000), but that number is growing every single day. What I want to be able to do is allow the users to choose from certain criteria which will then build a dynamic RSS URL, which will then be stored in a database table (one row for each user). This bit I will be able to do myself, but then I want to be able to send out new RSS feed items via e-mail to each user. This is the part I'm a little stuck on. I'm guessing I would somehow need to run a cron job to hit a page which would check each user's RSS feed and then, if there are new items, send them to the user via e-mail. That's where I am totally stuck, though, and I'm just wondering what the best way to go about it would be? That, or any PHP software that already does this sort of thing, would be great. I tried out phpList, but it has severe problems working with RSS: I only ever got it to work once and never again, and I've read that lots of people have had this same problem, so unfortunately it's not just me :-( I know there are services such as Feed My Inbox which I could easily set up so that users click a link and their RSS feed URL is added to that service, but I want to keep users from seeing the dynamic nature of the feed or they will easily be able to modify it to get at other items in the feed. I need this so that I can charge for access to the feeds, but if people can see the URL of the feed then I will come completely unstuck, as they will be able to get at whatever they want very easily. Therefore I'd like to be able to send the items out to them. Would really love to hear if anyone knows if this kind of thing is possible at all and what would be involved?

    Read the article

  • change/part doesn't work as expected with parse

    - by Rebol Tutorial
    According to http://www.rebol.com/docs/core23/rebolcore-15.html You can use change/part to parse and replace but that doesn't work well with this where I just try to replace the block <mytag > ... </mytag> by "MyString" content: {<mytag id="a" 111111111111111> </mytag> aaaaaaaaaaaaaaa aaaaaaaaaaaaaaa <mytag id="b" 22222222222222222> </mytag> <mytag id="c" 3333333333333> </mytag> aaaaaaaaaaaaaaa aaaaaaaaaaaaaaa <mytag id="d" 444444444444444> </mytag> } mytag: [ to {<mytag} start: ( ) thru {<mytag} to {id="} thru {id="} copy ID to {"} thru {"} to {</mytag>} thru {</mytag>} ending: (change/part start "mystring" ending) mark: ( write clipboard:// mark input) ] rule: [any mytag to end] parse content rule

    Read the article

  • Can't get ellipsis to work on Android

    - by Mark
    Hi, I have a TextView. I want it to ellipsize if longer than its available width. This does not work unless the input string has no spaces... can anyone provide an example of this working? I've tried different combinations of: singleLine="true" maxLines="1" scrollHorizontally="false" none of these have any effect. Again, if I supply a string that has no spaces in it, then the ellipsis appears correctly. What am I missing? I've tried this on 1.5, 1.6, 2.0, all same problem. Thanks

    Read the article

  • Can I use PayPal to charge a Credit Card automatically?

    - by Mark
    If I have a Visa card number saved in my database, is there a way I can charge that Visa automatically through the PayPal API without the user having to enter anything? We want to keep this site as easy and hassle-free to use as possible. It would be a variable amount, based on how they use the site. (Don't worry, proper disclaimers will be in place, and the user will be notified) What about these "recurring payments"? That way I don't have to store the CC info on my website, but do they allow variable amounts that I could periodically send to PayPal?

    Read the article

  • Does it matter where you get your CS degree?

    - by Mark Lubin
    Does going to a less famous university that might not be terribly selective necessarily preclude someone from being considered by the elite software companies, e.g. Google or Microsoft, regardless of their actual abilities? Furthermore, how often do you find your alma mater is a factor when looking for a job? Thanks again for the responses.

    Read the article

  • LINQ to SQL - Partial Class - C#

    - by Mark Comix
    Hi, I have a system with two different projects, one called LINQ_Extensions and the other ORM_Linq. In ORM_Linq I have the LINQ to SQL diagram with the SQL tables "converted" into classes. One of the classes is called "Tipos_Pago". In the other project I have another (partial) class "Tipos_Pago". I want to use the OnValidate method to validate the properties included in the class "Tipos_Pago", so I created this partial class. In the two projects I used the same namespace, "ORM_Linq" (I changed the namespace of the LINQ_Extensions project to match that of the ORM_Linq project). After those changes, Visual Studio gives me this error: Error 1 No defining declaration found for implementing declaration of partial method 'ORM_Linq.Tipos_Pago.OnValidate(System.Data.Linq.ChangeAction)' C..\Tipos_Pago.cs 13 22 Extensiones_Linq I don't have any idea what happened; can someone help me? Thanks, and sorry for my poor English. This is the code in the partial class: namespace ORM_Linq { public partial class Tipos_Pago { partial void OnValidate(System.Data.Linq.ChangeAction action) { //Valid code } } }
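
    Worth noting: partial classes and partial methods are only merged within a single project, so putting the two halves of Tipos_Pago in different projects will not combine them even if the namespaces match; the implementing OnValidate in LINQ_Extensions never sees the defining declaration the designer generated in ORM_Linq. A minimal sketch of the shape the compiler expects, with both halves compiled into the same project (assuming a reference to System.Data.Linq; in LINQ to SQL the first half is normally designer-generated):

      using System;
      using System.Data.Linq;

      namespace ORM_Linq
      {
          // Part 1: in LINQ to SQL this half is normally generated by the designer,
          // in the same project as the hand-written half below.
          public partial class Tipos_Pago
          {
              // Defining declaration: signature only, no body.
              partial void OnValidate(ChangeAction action);

              public void Validate(ChangeAction action)
              {
                  // If no implementing declaration exists, this call is simply
                  // removed by the compiler.
                  OnValidate(action);
              }
          }

          // Part 2: hand-written file, same project and same namespace.
          public partial class Tipos_Pago
          {
              // Implementing declaration: must be compiled into the same assembly
              // as the defining declaration above.
              partial void OnValidate(ChangeAction action)
              {
                  // validation logic here
              }
          }
      }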

    Read the article

  • ASP.NET MVC: How does one add authentication to RSS Feeds?

    - by Mark Redman
    I have seen a few examples of how to create RSS feeds using ASP.NET MVC, either by creating an Action or through an HttpHandler. I need to authenticate feeds and am wondering how this is to be done (and supported by RSS readers, rather than just browsing to the page/XML through a browser), and how authentication would differ between an MVC Action and an HttpHandler.
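
    Most feed readers support HTTP Basic authentication (ideally over HTTPS), so one option for the Action route is a filter attribute that validates the Authorization header and challenges with a 401; an HttpHandler would do the same checks at the top of ProcessRequest. A rough sketch, assuming ASP.NET Membership as the credential store (swap in whatever store you actually use):

      using System;
      using System.Text;
      using System.Web.Mvc;
      using System.Web.Security;

      // Rough sketch: HTTP Basic authentication for a feed action (use over HTTPS).
      public class BasicAuthAttribute : ActionFilterAttribute
      {
          public override void OnActionExecuting(ActionExecutingContext filterContext)
          {
              var request = filterContext.HttpContext.Request;
              var header = request.Headers["Authorization"];
              if (header != null && header.StartsWith("Basic ", StringComparison.OrdinalIgnoreCase))
              {
                  var raw = Encoding.UTF8.GetString(Convert.FromBase64String(header.Substring(6)));
                  var parts = raw.Split(new[] { ':' }, 2); // "user:password"
                  if (parts.Length == 2 && Membership.ValidateUser(parts[0], parts[1]))
                      return; // authenticated; let the action build the feed
              }

              // Challenge the client; feed readers that support authentication
              // will resend the request with credentials.
              var response = filterContext.HttpContext.Response;
              response.StatusCode = 401;
              response.AddHeader("WWW-Authenticate", "Basic realm=\"Feeds\"");
              filterContext.Result = new EmptyResult();
          }
      }

    Applying [BasicAuth] to the feed action then keeps the behaviour the same for readers and browsers; the only real difference from the HttpHandler approach is where the check is plugged in.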

    Read the article

  • Rows and Columns of an Excel File

    - by Mark
    Is it possible to write code that specifies the rows and columns of a spreadsheet in terms of NUMBERS and NOT LIKE (B2:D6)? Example: excelSheet.Range("B2:D6").Interior.Color = RGB(100, 100, 255) Instead of B2 and D6 I want to write 5 rows and 3 columns. Is it possible to do this in VB.NET 2003 code?

    Read the article

  • Using XmlDiffPatch when writing to stream

    - by Mark Smith
    I am trying to use xmldiffpatch when comparing two Xmls(one from a stream, the other from a file) and writing the diff patch to a stream. The first method is to write my xml to a memory stream. The second method loads an xml from a file and creates a stream for the patched file to be written into. The third method actually compares the two files and writes the third. The xmldiff.Compare(originalFile, finalFile, dgw); method takes (XmlReader, XmlReader, XmlWriter). I'm always getting that both files are identical, even though they are not, so I know that I am missing something. Any help is appreciated! public MemoryStream FirstXml() { string[] names = { "John", "Mohammed", "Marc", "Tamara", "joy" }; MemoryStream ms = new MemoryStream(); XmlTextWriter xtw= new XmlTextWriter(ms, Encoding.UTF8); xtw.WriteStartDocument(); xtw.WriteStartElement("root"); foreach (string s in names) { xtw.WriteStartElement(s); xtw.WriteEndElement(); } xtw.WriteEndElement(); xtw.WriteEndDocument(); return ms; } public Stream SecondXml() { XmlReader finalFile =XmlReader.Create(@"c:\......\something.xml"); MemoryStream ms = FirstXml(); XmlReader originalFile = XmlReader.Create(ms); MemoryStream ms2 = new MemoryStream(); XmlTextWriter dgw = new XmlTextWriter(ms2, Encoding.UTF8); GenerateDiffGram(originalFile, finalFile, dgw); return ms2; } public void GenerateDiffGram(XmlReader originalFile, XmlReader finalFile, XmlWriter dgw) { XmlDiff xmldiff = new XmlDiff(); bool bIdentical = xmldiff.Compare(originalFile, finalFile, dgw); dgw.Close(); StreamReader sr = new StreamReader(SecondXml()); string xmlOutput = sr.ReadToEnd(); if(xmlOutput.Contains("</xd:xmldiff>")) {Console.WriteLine("Xml files are not identical"); Console.Read();} else {Console.WriteLine("Xml files are identical");Console.Read();} }
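
    The "always identical" result is consistent with the memory streams never being flushed or rewound: XmlTextWriter buffers its output, and after writing, ms.Position is left at the end, so XmlReader.Create(ms) sees an empty document. A minimal sketch of the fix for the first method, assuming the original usings (the same Flush/rewind treatment applies to ms2 before it is read back, and note that GenerateDiffGram calls SecondXml() again rather than reading the stream it was just given):

      public MemoryStream FirstXml()
      {
          string[] names = { "John", "Mohammed", "Marc", "Tamara", "joy" };
          MemoryStream ms = new MemoryStream();
          XmlTextWriter xtw = new XmlTextWriter(ms, Encoding.UTF8);
          xtw.WriteStartDocument();
          xtw.WriteStartElement("root");
          foreach (string s in names)
          {
              xtw.WriteStartElement(s);
              xtw.WriteEndElement();
          }
          xtw.WriteEndElement();
          xtw.WriteEndDocument();
          xtw.Flush();      // push buffered XML out to the MemoryStream
          ms.Position = 0;  // rewind so XmlReader.Create(ms) reads from the start
          return ms;
      }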

    Read the article

  • Why is Grails Searchable Plugin causing errors on Hibernate AutoFlush?

    - by Mark Rogers
    In the Grails 1.2.5 project that I am trying to troubleshoot, we use the Grails Searchable plugin 0.5.5.1. The problem is that whenever we attempt to index large sets of domain classes, Grails keeps throwing: ERROR hibernate.AssertionFailure - an assertion failure occured (this may indicate a bug in Hibernate, but is more likely due to unsafe use of the session) org.hibernate.AssertionFailure: collection [domain-class] was not processed by flush() But the domain classes involved have been mapped and used by Hibernate without issues outside of the calls to the Searchable plugin. The use of the Searchable plugin goes as follows: 1. Create a Compass session with compass.openSession() 2. Begin a Compass transaction: compassSession.beginTransaction() 3. Call compassSession.create(result.get(0)) on an important unindexed domain class 4. Call compassTransaction.commit() to commit the transaction 5. Go to 2 and process the next domain class. Between the 3rd and 4th domain class, an autoflush is triggered that throws the error. Can anyone give me any hints about how to solve this problem? Has anyone encountered it before? I know that they had a systemic issue with this back in pre-0.5 versions of the Searchable plugin. Is it possible those issues weren't totally fixed?

    Read the article

  • How do I query delegation properties of an active directory user account?

    - by Mark J Miller
    I am writing a utility to audit the configuration of a WCF service. In order to properly pass credentials from the client, thru the WCF service back to the SQL back end the domain account used to run the service must be configured in Active Directory with the setting "Trust this user for delegation" (Properties - "Delegation" tab). Using C#, how do I access the settings on this tab in Active Directory. I've spent the last 5 hours trying to track this down on the web and can't seem to find it. Here's what I've done so far: using (Domain domain = Domain.GetCurrentDomain()) { Console.WriteLine(domain.Name); // get domain "dev" from MSSQLSERVER service account DirectoryEntry ouDn = new DirectoryEntry("LDAP://CN=Users,dc=dev,dc=mydomain,dc=lcl"); DirectorySearcher search = new DirectorySearcher(ouDn); // get sAMAccountName "dev.services" from MSSQLSERVER service account search.Filter = "(sAMAccountName=dev.services)"; search.PropertiesToLoad.Add("displayName"); search.PropertiesToLoad.Add("userAccountControl"); SearchResult result = search.FindOne(); if (result != null) { Console.WriteLine(result.Properties["displayName"][0]); DirectoryEntry entry = result.GetDirectoryEntry(); int userAccountControlFlags = (int)entry.Properties["userAccountControl"].Value; if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_FOR_DELEGATION) Console.WriteLine("TRUSTED_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) == (int)UserAccountControl.TRUSTED_TO_AUTH_FOR_DELEGATION) Console.WriteLine("TRUSTED_TO_AUTH_FOR_DELEGATION"); else if ((userAccountControlFlags & (int)UserAccountControl.NOT_DELEGATED) == (int)UserAccountControl.NOT_DELEGATED) Console.WriteLine("NOT_DELEGATED"); foreach (PropertyValueCollection pvc in entry.Properties) { Console.WriteLine(pvc.PropertyName); for (int i = 0; i < pvc.Count; i++) { Console.WriteLine("\t{0}", pvc[i]); } } } } The "userAccountControl" does not seem to be the correct property. I think it is tied to the "Account Options" section on the "Account" tab, which is not what we're looking for but this is the closest I've gotten so far. The justification for all this is: We do not have permission to setup the service in QA or in Production, so along with our written instructions (which are notoriously only followed in partial) I am creating a tool that will audit the setup (WCF and SQL) to determine if the setup is correct. This will allow the person deploying the service to run this utility and verify everything is setup correctly - saving us hours of headaches and reducing downtime during deployment.
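
    For reference, the Delegation tab maps onto two places in the directory: "Trust this user for delegation to any service" sets the TRUSTED_FOR_DELEGATION bit of userAccountControl (which the code above already tests), while constrained delegation ("to specified services only") is recorded in the msDS-AllowedToDelegateTo attribute, with TRUSTED_TO_AUTH_FOR_DELEGATION added when "use any authentication protocol" is selected. A small sketch of reading that attribute from the entry already retrieved above:

      // Sketch: list constrained-delegation targets for the account found above.
      // Assumes "result" is the SearchResult returned by search.FindOne().
      DirectoryEntry entry = result.GetDirectoryEntry();
      PropertyValueCollection delegateTo = entry.Properties["msDS-AllowedToDelegateTo"];
      if (delegateTo.Count > 0)
      {
          Console.WriteLine("Constrained delegation allowed to:");
          foreach (object servicePrincipalName in delegateTo)
              Console.WriteLine("\t{0}", servicePrincipalName);
      }
      else
      {
          Console.WriteLine("No constrained delegation targets are set.");
      }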

    Read the article

  • What garbage collection algorithms do all 5 major browsers use?

    - by Martin Wittemann
    I am currently rethinking the object dispose handling of the qooxdoo JavaScript framework. Have a look at the following diagram (A is currently in scope): Let's say we want to delete B. Generally, we cut all references between all objects. This means we cut connections 1 to 5 in the example. Is this really necessary? As far as I have read here [1], browsers use the mark-and-sweep algorithm. In that case, we would just need to cut reference 1 (the connection to the scope) and reference 5 (the connection to the DOM), which could be much faster. But can I be sure that all browsers use the mark-and-sweep algorithm or something similar? [1] http://stackoverflow.com/questions/864516/what-is-javascript-garbage-collection

    Read the article

  • iPhone Tab Bar application crash

    - by Mark Szymanski
    I have an application that uses a tab bar and whenever it launches it crashes and gives me the following error and stack trace: 2010-04-22 16:15:03.390 iCrushCans[59858:207] *** Terminating app due to uncaught exception 'NSUnknownKeyException', reason: '[<UIWindow 0x3e051a0> setValue:forUndefinedKey:]: this class is not key value coding-compliant for the key rootViewController.' 2010-04-22 16:15:03.392 iCrushCans[59858:207] Stack: ( 29680731, 2425423113, 29839809, 305768, 304309, 2957847, 4641908, 29583663, 4636459, 4644727, 2805842, 2844630, 2833204, 2815615, 2842721, 37776729, 29465472, 29461576, 2809365, 2846639 ) Thanks in advance!

    Read the article

  • Using <input type=file /> with J2EE/MySQL Backend

    - by Mark Hazlett
    Hey everyone, I'm wondering how I can hook up an input type=file to send a picture back to a backend servlet that will eventually be stored in a MySQL database as a BLOB? In other words, how can I upload a picture using the input and send that back to the servlet to insert into the database as a BLOB type? Thanks

    Read the article

  • ASP.NET MVC: What is the correct way to redirect to pages/actions in MVC?

    - by Mark Redman
    I am fairly new to MVC and not sure exactly which Redirect... replaces the standard redirect used in WebForms, i.e. Response.Redirect(). For instance, I need to redirect to other pages in a couple of scenarios: 1) When the user logs out (Forms sign-out in an Action) I want to redirect to a login page. 2) In a Controller or base Controller event, e.g. Initialize, I want to redirect to another page (AbsoluteRootUrl + Controller + Action). It seems that multiple redirects get called in some cases, which causes errors, something to do with the fact that a page is already being redirected? How can I cancel the current request and just redirect?
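
    For both scenarios the usual MVC answer is to return or assign a redirect result and let the framework issue it, rather than calling Response.Redirect yourself. A sketch under those assumptions (controller and action names are placeholders; OnActionExecuting is used instead of Initialize because it lets you short-circuit the request by setting filterContext.Result, which avoids competing redirects):

      using System.Web.Mvc;
      using System.Web.Routing;
      using System.Web.Security;

      public class AccountController : Controller
      {
          // Scenario 1: sign out, then return a redirect result instead of
          // calling Response.Redirect.
          public ActionResult LogOff()
          {
              FormsAuthentication.SignOut();
              return RedirectToAction("Login", "Account");
          }
      }

      public class BaseController : Controller
      {
          // Scenario 2: redirecting from a base controller. Setting
          // filterContext.Result short-circuits the current request cleanly,
          // so no second redirect is issued.
          protected override void OnActionExecuting(ActionExecutingContext filterContext)
          {
              bool mustRedirect = false; // hypothetical condition, e.g. profile incomplete
              if (mustRedirect)
              {
                  filterContext.Result = new RedirectToRouteResult(
                      new RouteValueDictionary(new { controller = "Home", action = "Index" }));
                  return;
              }
              base.OnActionExecuting(filterContext);
          }
      }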

    Read the article

  • Is there a reason why SSIS significantly slows down after a few minutes?

    - by Mark
    I'm running a fairly substantial SSIS package against SQL 2008 - and I'm getting the same results both in my dev environment (Win7-x64 + SQL-x64-Developer) and the production environment (Server 2008 x64 + SQL Std x64). The symptom is that initial data loading screams at between 50K - 500K records per second, but after a few minutes the speed drops off dramatically and eventually crawls embarrassingly slowly. The database is in the Simple recovery model, the target tables are empty, and all of the prerequisites for minimally logged bulk inserts are being met. The data flow is a simple load from a RAW input file to a schema-matched table (i.e. no complex transforms of data, no sorting, no lookups, no SCDs, etc.). The problem has the following qualities and resiliences: the problem persists no matter what the target table is. RAM usage is lowish (45%) - there's plenty of spare RAM available for SSIS buffers or SQL Server to use. Perfmon shows buffers are not spooling, disk response times are normal, disk availability is high. CPU usage is low (hovers around 25%, shared between sqlserver.exe and DtsDebugHost.exe). Disk activity is primarily on TempDB.mdf, but I/O is very low (< 600 Kb/s). OLE DB destination and SQL Server Destination both exhibit this problem. To sum it up, I expect either disk, CPU or RAM to be exhausted before the package slows down, but instead it's as if the SSIS package is taking an afternoon nap. SQL Server remains responsive to other queries, and I can't find any performance counters or logged events that betray the cause of the problem. I'll gratefully reward any reasonable answers / suggestions.

    Read the article

  • Dispatcher.CurrentDispatcher?

    - by Mark
    If I do this... public PriorityQueue(Dispatcher dispatcher = null) { this.dispatcher = dispatcher ?? Dispatcher.CurrentDispatcher; } And then use it in a ViewModel (without passing any args) that is created through the XAML, this.dispatcher will point to the UI thread right?
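
    Short answer: if the view model really is created by the XAML parser, its constructor runs on the UI thread, so Dispatcher.CurrentDispatcher there is the UI thread's dispatcher. The caveat is that CurrentDispatcher creates a dispatcher for whatever thread calls it, so a view model constructed on a worker thread would silently capture the wrong one. A defensive variant, sketched under those assumptions:

      using System.Windows;
      using System.Windows.Threading;

      public class PriorityQueue
      {
          private readonly Dispatcher dispatcher;

          // Dispatcher.CurrentDispatcher binds to whichever thread runs this
          // constructor (creating a dispatcher for that thread if none exists).
          // Falling back to Application.Current.Dispatcher pins the UI dispatcher
          // explicitly, which is safer if the view model is ever constructed
          // off the UI thread.
          public PriorityQueue(Dispatcher dispatcher = null)
          {
              this.dispatcher = dispatcher
                  ?? (Application.Current != null
                          ? Application.Current.Dispatcher
                          : Dispatcher.CurrentDispatcher);
          }
      }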

    Read the article

  • Optimizing python code performance when importing zipped csv to a mongo collection

    - by mark
    I need to import a zipped csv into a mongo collection, but there is a catch - every record contains a timestamp in Pacific Time, which must be converted to the local time corresponding to the (longitude,latitude) pair found in the same record. The code looks like so: def read_csv_zip(path, timezones): with ZipFile(path) as z, z.open(z.namelist()[0]) as input: csv_rows = csv.reader(input) header = csv_rows.next() check,converters = get_aux_stuff(header) for csv_row in csv_rows: if check(csv_row): row = { converter[0]:converter[1](value) for converter, value in zip(converters, csv_row) if allow_field(converter) } ts = row['ts'] lng, lat = row['loc'] found_tz_entry = timezones.find_one(SON({'loc': {'$within': {'$box': [[lng-tz_lookup_radius, lat-tz_lookup_radius],[lng+tz_lookup_radius, lat+tz_lookup_radius]]}}})) if found_tz_entry: tz_name = found_tz_entry['tz'] local_ts = ts.astimezone(timezone(tz_name)).replace(tzinfo=None) row['tz'] = tz_name else: local_ts = (ts.astimezone(utc) + timedelta(hours = int(lng/15))).replace(tzinfo = None) row['local_ts'] = local_ts yield row def insert_documents(collection, source, batch_size): while True: items = list(itertools.islice(source, batch_size)) if len(items) == 0: break; try: collection.insert(items) except: for item in items: try: collection.insert(item) except Exception as exc: print("Failed to insert record {0} - {1}".format(item['_id'], exc)) def main(zip_path): with Connection() as connection: data = connection.mydb.data timezones = connection.timezones.data insert_documents(data, read_csv_zip(zip_path, timezones), 1000) The code proceeds as follows: Every record read from the csv is checked and converted to a dictionary, where some fields may be skipped, some titles be renamed (from those appearing in the csv header), some values may be converted (to datetime, to integers, to floats. etc ...) For each record read from the csv, a lookup is made into the timezones collection to map the record location to the respective time zone. If the mapping is successful - that timezone is used to convert the record timestamp (pacific time) to the respective local timestamp. If no mapping is found - a rough approximation is calculated. The timezones collection is appropriately indexed, of course - calling explain() confirms it. The process is slow. Naturally, having to query the timezones collection for every record kills the performance. I am looking for advises on how to improve it. Thanks. EDIT The timezones collection contains 8176040 records, each containing four values: > db.data.findOne() { "_id" : 3038814, "loc" : [ 1.48333, 42.5 ], "tz" : "Europe/Andorra" } EDIT2 OK, I have compiled a release build of http://toblerity.github.com/rtree/ and configured the rtree package. Then I have created an rtree dat/idx pair of files corresponding to my timezones collection. So, instead of calling collection.find_one I call index.intersection. Surprisingly, not only there is no improvement, but it works even more slowly now! May be rtree could be fine tuned to load the entire dat/idx pair into RAM (704M), but I do not know how to do it. Until then, it is not an alternative. In general, I think the solution should involve parallelization of the task. 
EDIT3 Profile output when using collection.find_one: >>> p.sort_stats('cumulative').print_stats(10) Tue Apr 10 14:28:39 2012 ImportDataIntoMongo.profile 64549590 function calls (64549180 primitive calls) in 1231.257 seconds Ordered by: cumulative time List reduced from 730 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.012 0.012 1231.257 1231.257 ImportDataIntoMongo.py:1(<module>) 1 0.001 0.001 1230.959 1230.959 ImportDataIntoMongo.py:187(main) 1 853.558 853.558 853.558 853.558 {raw_input} 1 0.598 0.598 370.510 370.510 ImportDataIntoMongo.py:165(insert_documents) 343407 9.965 0.000 359.034 0.001 ImportDataIntoMongo.py:137(read_csv_zip) 343408 2.927 0.000 287.035 0.001 c:\python27\lib\site-packages\pymongo\collection.py:489(find_one) 343408 1.842 0.000 274.803 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:699(next) 343408 2.542 0.000 271.212 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:644(_refresh) 343408 4.512 0.000 253.673 0.001 c:\python27\lib\site-packages\pymongo\cursor.py:605(__send_message) 343408 0.971 0.000 242.078 0.001 c:\python27\lib\site-packages\pymongo\connection.py:871(_send_message_with_response) Profile output when using index.intersection: >>> p.sort_stats('cumulative').print_stats(10) Wed Apr 11 16:21:31 2012 ImportDataIntoMongo.profile 41542960 function calls (41542536 primitive calls) in 2889.164 seconds Ordered by: cumulative time List reduced from 778 to 10 due to restriction <10> ncalls tottime percall cumtime percall filename:lineno(function) 1 0.028 0.028 2889.164 2889.164 ImportDataIntoMongo.py:1(<module>) 1 0.017 0.017 2888.679 2888.679 ImportDataIntoMongo.py:202(main) 1 2365.526 2365.526 2365.526 2365.526 {raw_input} 1 0.766 0.766 502.817 502.817 ImportDataIntoMongo.py:180(insert_documents) 343407 9.147 0.000 491.433 0.001 ImportDataIntoMongo.py:152(read_csv_zip) 343406 0.571 0.000 391.394 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:384(intersection) 343406 379.957 0.001 390.824 0.001 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:435(_intersection_obj) 686513 22.616 0.000 38.705 0.000 c:\python27\lib\site-packages\rtree-0.7.0-py2.7.egg\rtree\index.py:451(_get_objects) 343406 6.134 0.000 33.326 0.000 ImportDataIntoMongo.py:162(<dictcomp>) 346 0.396 0.001 30.665 0.089 c:\python27\lib\site-packages\pymongo\collection.py:240(insert) EDIT4 I have parallelized the code, but the results are still not very encouraging. I am convinced it could be done better. See my own answer to this question for details.

    Read the article

  • Empty Structures compile in VB 10+

    - by Mark Hurd
    This is at least a documentation error, if not a bug. In VB.NET prior to .NET 4.0 (i.e. VB.NET 7 through 9) an empty Structure declaration fails at compile-time with error BC30281: Structure 'MySimpleEmpty' must contain at least one instance member variable or Event declaration. E.g. The following two structures compile successfully in VB10, and not prior: Structure MySimpleEmpty End Structure Public Structure AnotherEmpty Public Const StillEmpty As Boolean = True End Structure I note the documentation for the Error BC30281 stops at VB9, but the documentation for the Structure statement still has the datamemberdeclarations as required even as of VB11 (.NET 4.5 VS2012). These two Structures compile in VB11 (VS2012) as well. (Thanks John Woo.) Is there some blog entry or documentation confirming this is an intended change or a bug in VB10 and later?

    Read the article

  • PowerShell equivalent of Python's if __name__ == '__main__':

    - by Mark Mascolino
    I am really fond of Python's capability to do things like this: if __name__ == '__main__': #setup testing code here #or set up a call to a function with parameters and human-format the output #etc... This is nice because I can treat a Python script file as something that can be called from the command line, but it remains available for me to import its functions and classes into a separate Python script file easily without triggering the default "run from the command line" behavior. Does PowerShell have a similar facility that I could exploit? And if it doesn't, how should I be organizing my library of function files so that I can easily execute some of them while I am developing them?

    Read the article
