Search Results

Search found 68155 results on 2727 pages for 'data security'.


  • SINGLE SIGN ON SECURITY THREAT! FACEBOOK access_token broadcast in the open/clear

    - by MOKANA
    Subsequent to my posting there was a remark made that this was not really a question, but I thought I did indeed postulate one. So that there is no ambiguity, here is the question with a lead-in: since there is no data sent from Facebook during the Canvas load process that is not at some point divulged, including the access_token, session and other data that could uniquely identify a user, does anyone see any way, other than adding one more layer (i.e., a password sent over the wire via HTTPS along with the access_token), that will ensure unique, untampered-with security for the user?

    Using Wireshark I captured the local broadcast while loading my Canvas application page. I was hugely surprised to see the access_token broadcast in the open, viewable for anyone to see. This access_token is appended to any HTTPS call to the Facebook Open Graph API. Using Facebook as a single-click logon has now raised huge concerns for me. It is stored in a session object in memory, and the cookie is cleared upon app termination; after reviewing the FB.init calls I saw a lot of HTTPS calls, so I assumed the access_token was always encrypted. But last night I saw in the status bar a call that was simply an HTTP call that included the App ID, so I felt I should sniff the application Canvas load sequence. Today I did sniff the broadcast, and in the attached image you can see that there are HTTP calls with the access_token being broadcast in the open and clear for anyone to gain access to. Am I missing something, or are what I am seeing and my interpretation correct? If anyone can sniff and get the access_token, they can theoretically make calls to the Graph API via HTTPS, even though the callback would still need to be the site established in Facebook's application setup. But what is truly a security threat is anyone using the access_token for access to their own site. I do not see the value of a single sign-on via Facebook if the only thing that was established as secure was the access_token, because from what I can see it clearly is not secure. Access tokens that never have an expiry date do not change. Access_tokens are different for every user, so access to another site could be held tight to just a single user, but compromising even a single user's data is unacceptable. http://www.creatingstory.com/images/InTheOpen.png

    Went back and did more research on this. FINDINGS: I re-ran the Canvas application to verify that it was not any of my code doing the broadcasting. In this call: HTTP GET /connect.php/en_US/js/CacheData HTTP/1.1 the USER ID is clearly visible in the cookie. So USER IDs are fully visible, but they are already: anyone can go to pretty much anyone's page, hover over the image, and see the USER ID. So no big threat. APP_IDs are also easily obtainable - but . . . http://www.creatingstory.com/images/InTheOpen2.png The above file clearly shows the FULL ACCESS TOKEN in the OPEN via a Facebook-initiated call. Am I wrong? TELL ME I AM WRONG, because I want to be wrong about this. I have since reset my app secret, so I am showing the real sniff of the Canvas page being loaded.

    Additional data 02/20/2011: @ifaour - I appreciate the time you took to compile your response. I am pretty familiar with the OAuth process and have a pretty solid understanding of the signed_request unpacking and utilization of the access_token.
    I perform a substantial amount of my processing on the server, and my Facebook server-side flows are all complete and function without any flaw that I know of. The application secret is secure, never passed to the front-end application, and also changed regularly. I am being as fanatical about security as I can be, knowing there is so much I don't know that could come back and bite me.

    Two huge access_token issues. The issues concern the possible utilization of the access_token from the USER AGENT (browser). During the FB.init() process of the Facebook JavaScript SDK, a cookie is created as well as an object in memory called a session object. This object, along with the cookie, contains the access_token, session, a secret, the uid, and the status of the connection. The session object is structured such that it supports both the new OAuth and the legacy flows. With OAuth, the access_token and status are pretty much all that is used in the session object.

    The first issue is that the access_token is used to make HTTPS calls to the Graph API. If you had the access_token, you could do this from any browser: https://graph.facebook.com/220439?access_token=... and it will return a ton of information about the user. So anyone with the access_token can gain access to a Facebook account, and can also make additional calls for any info the user has granted to the application tied to the access_token. At first I thought that a call into the Graph had to have a callback to the URL established in the app setup, but I tested it as mentioned below and it returns info right back into the browser. Adding that callback feature would be a good idea, I think; it tightens things up a bit.

    The second issue is the utilization of some unique, private, secured data that identifies the user to the third-party database. I.e., in my case, I would use a single sign-on to populate user information into my database using this unique secured data item (the access_token, which contains the APP ID, the USER ID, and a sequence hashed with the secret). None of this is a problem on the server side: you get a signed_request, you unpack it with the secret, make HTTPS calls, and get HTTPS responses back. When a user has information entered via the USER AGENT (browser) that must be stored via a POST, this unique secured data element would be sent via HTTPS so that it is validated prior to database insertion. However, if there is NO secured piece of unique data supplied via the single sign-on process, then there is no way to guard against unauthorized access. The access_token is the one piece of data that Facebook uses to make the HTTPS calls into the Graph API. It is considered unique in regard to BOTH the USER and the APPLICATION, and is initially secured via the signed_request packaging. If, however, it is subsequently transmitted in the clear, and I can sniff the wire and obtain the access_token, then I can pretend to be the application and gain the information the user has authorized the application to see. I tried the above example from Safari and IE browsers and it returned all of my information to me in the browser. In conclusion, the access_token is part of the signed_request, and that is how the application initially obtains it.
    After OAuth authentication and authorization (i.e., the USER has logged into Facebook and then runs your app), the access_token is stored as mentioned above, and I have sniffed it such that I see it stored in a cookie that is transmitted over the wire. The result is that there is NO UNIQUE, SECURED, IDENTIFIABLE piece of information that can be used to support interaction with the database. In other words, unless there were one more piece of secure data sent along with the access_token to my database (i.e., a password), I would not be able to discern whether a call is legitimate. Luckily I utilize secure AJAX via POST, and the call has to come from the same domain, but I am sure there is a way to hijack that. I am totally open to any ideas on this topic on how to uniquely identify my USERS other than adding another layer (a password) to this single sign-on process, or for someone to just share with me that I read and analyzed my data incorrectly and that the access_token is always secure over the wire. Mahalo nui loa in advance.
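
    As an aside on the server-side flow mentioned above, here is a minimal Python sketch of the standard signed_request check (a base64url payload plus an HMAC-SHA256 signature computed with the app secret). It is illustrative, not Facebook SDK code; and it only protects integrity at the server, so it does nothing for a token that is later replayed over plain HTTP, which is exactly the exposure described above.

        import base64, hashlib, hmac, json

        def base64_url_decode(data):
            # Facebook-style URL-safe base64 arrives without padding; restore it
            return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

        def parse_signed_request(signed_request, app_secret):
            sig_b64, payload_b64 = signed_request.split(".", 1)
            sig = base64_url_decode(sig_b64)
            data = json.loads(base64_url_decode(payload_b64))
            # recompute the HMAC-SHA256 signature over the raw payload
            expected = hmac.new(app_secret.encode(), payload_b64.encode(),
                                hashlib.sha256).digest()
            if not hmac.compare_digest(sig, expected):
                raise ValueError("signed_request signature mismatch")
            return data  # includes oauth_token / user_id when authorized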


  • yum update works but yum --security update fails to work in Fedora 12

    - by bobo
    I had already installed yum-security before, and I was going to do an update by entering the following command:

        [root@localhost /]# yum update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Skipping security plugin, no data
        Setting up Update Process
        Resolving Dependencies
        Skipping security plugin, no data
        --> Running transaction check
        ---> Package eject.i686 0:2.1.5-17.fc12 set to be updated
        ---> Package glibc.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-common.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-devel.i686 0:2.11.1-4 set to be updated
        ---> Package glibc-headers.i686 0:2.11.1-4 set to be updated
        ---> Package gnome-themes.noarch 0:2.28.1-3.fc12 set to be updated
        ---> Package gtk2.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package gtk2-immodule-xim.i686 0:2.18.9-3.fc12 set to be updated
        ---> Package kernel-PAE.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAE-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-PAEdebug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-debug-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-devel.i686 0:2.6.32.11-99.fc12 set to be installed
        ---> Package kernel-firmware.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package kernel-headers.i686 0:2.6.32.11-99.fc12 set to be updated
        ---> Package libnetfilter_conntrack.i686 0:0.0.101-1.fc12 set to be updated
        ---> Package media-player-info.noarch 0:5-1.fc12 set to be updated
        ---> Package nscd.i686 0:2.11.1-4 set to be updated
        ---> Package perf.noarch 0:2.6.32.11-99.fc12 set to be updated
        ---> Package rhythmbox.i686 0:0.12.6-5.fc12 set to be updated
        ---> Package sysvinit-tools.i686 0:2.87-3.dsf.fc12 set to be updated
        --> Finished Dependency Resolution
        --> Running transaction check
        ---> Package kernel-PAE.i686 0:2.6.31.12-174.2.3.fc12 set to be erased
        --> Finished Dependency Resolution

        Dependencies Resolved

        ================================================================================
         Package                 Arch    Version                 Repository   Size
        ================================================================================
        Installing:
         kernel-PAE              i686    2.6.32.11-99.fc12       updates      20 M
         kernel-PAE-devel        i686    2.6.32.11-99.fc12       updates      6.2 M
         kernel-PAEdebug-devel   i686    2.6.32.11-99.fc12       updates      6.2 M
         kernel-debug-devel      i686    2.6.32.11-99.fc12       updates      6.2 M
         kernel-devel            i686    2.6.32.11-99.fc12       updates      6.1 M
        Updating:
         eject                   i686    2.1.5-17.fc12           updates      49 k
         glibc                   i686    2.11.1-4                updates      4.2 M
         glibc-common            i686    2.11.1-4                updates      14 M
         glibc-devel             i686    2.11.1-4                updates      953 k
         glibc-headers           i686    2.11.1-4                updates      590 k
         gnome-themes            noarch  2.28.1-3.fc12           updates      1.5 M
         gtk2                    i686    2.18.9-3.fc12           updates      3.2 M
         gtk2-immodule-xim       i686    2.18.9-3.fc12           updates      60 k
         kernel-firmware         noarch  2.6.32.11-99.fc12       updates      968 k
         kernel-headers          i686    2.6.32.11-99.fc12       updates      749 k
         libnetfilter_conntrack  i686    0.0.101-1.fc12          updates      37 k
         media-player-info       noarch  5-1.fc12                updates      32 k
         nscd                    i686    2.11.1-4                updates      189 k
         perf                    noarch  2.6.32.11-99.fc12       updates      79 k
         rhythmbox               i686    0.12.6-5.fc12           updates      4.0 M
         sysvinit-tools          i686    2.87-3.dsf.fc12         updates      58 k
        Removing:
         kernel-PAE              i686    2.6.31.12-174.2.3.fc12  @updates     72 M

        Transaction Summary
        ================================================================================
        Install       5 Package(s)
        Upgrade      16 Package(s)
        Remove        1 Package(s)
        Reinstall     0 Package(s)
        Downgrade     0 Package(s)

        Total download size: 75 M
        Is this ok [y/N]:

    But then I changed my mind: I decided to do a security-only update instead of a full update, so
    I entered the following command:

        [root@localhost /]# yum --security update
        Loaded plugins: presto, priorities, refresh-packagekit, security
        Setting up Update Process
        Resolving Dependencies
        Limiting packages to security relevant ones
        http://download.fedoraproject.org/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cuhk.edu.hk/pub/linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.riken.jp/Linux/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.cse.iitk.ac.in/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.isu.net.sa/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        ftp://ftp.chu.edu.tw/linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno -1] Metadata file does not match checksum
        Trying other mirror.
        http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.yandex.ru/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://linus.iyte.edu.tr/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.jaist.ac.jp/pub/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.kddilabs.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/packages/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://www.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://srv2.ftp.ne.jp/Linux/distributions/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.rhd.ru/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.163.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirror.nus.edu.sg/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.yz.yamagata-u.ac.jp/pub/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.linux.org.tr/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://mirrors.cytanet.com.cy/linux/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://fedoramirror.hnsdc.com/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.twaren.net/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://c147.twaren.net/pub/Linux/Fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.mirror.tw/pub/fedora/linux/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ftp.cs.pu.edu.tw/Linux/Fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz: [Errno 14] HTTP Error 416 : http://ubuntu.cn99.com/fedora/updates/12/i386/repodata/updateinfo.xml.gz
        Trying other mirror.
        Error: failure: repodata/updateinfo.xml.gz from updates: [Errno 256] No more mirrors to try.
        You could try using --skip-broken to work around the problem
        ^C[root@localhost /]#

    As can be seen in the output, when I run the yum --security update command it does show the "Limiting packages to security relevant ones" message, so it's aware of the option. But I don't know why it keeps reporting HTTP error 416. I searched on Google and found the following description of the error, but it doesn't seem to help much:

        HTTP ERROR 416 - Requested Range Not Satisfiable
        A 416 status code indicates that the server was unable to fulfill the request. This may be, for example, because the client asked for the 800th-900th bytes of a document, but the document was only 200 bytes long.

    It suggests using the --skip-broken option; I tried it and the output is the same. I have already tested this many times, and it just doesn't work when the --security option is used. What could be the possible cause of this problem?
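
    For what it's worth, a 416 is tied to the HTTP Range header: yum resumes interrupted downloads with ranged requests, so a stale or partially cached copy of updateinfo.xml.gz can produce exactly this error on every mirror (clearing yum's cached metadata is the usual workaround). A small Python sketch of the mechanism, with a placeholder URL:

        import requests

        # Ask for a byte range the file cannot satisfy; a server that
        # enforces ranges answers 416 Requested Range Not Satisfiable.
        resp = requests.get(
            "http://mirror.example.org/repodata/updateinfo.xml.gz",  # placeholder
            headers={"Range": "bytes=999999999-"},
        )
        print(resp.status_code)  # 416 (or 200 if the server ignores Range)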


  • Transferring data from Salesforce using Apex Data Loader to Oracle

    - by Barret
    While attempting to transfer data from Salesforce to Oracle using the Apex Data Loader, I keep getting the following error:

        26937 [databaseAccountExtract] FATAL com.salesforce.dataloader.dao.database.DatabaseContext - Error getting value for SQL parameter: nkey__c. Please make sure that the value exists in the configuration file or is passed in. Database configuration: insertAccount.

    The database-conf.xml has the following beans:

        <bean id="insertAccount"
              class="com.salesforce.dataloader.dao.database.DatabaseConfig"
              singleton="true">
            <property name="sqlConfig" ref="insertAccountSql"/>
            <property name="dataSource" ref="dbDataSource"/>
        </bean>

        <bean id="insertAccountSql"
              class="com.salesforce.dataloader.dao.database.SqlConfig"
              singleton="true">
            <property name="sqlString">
                <value>
                    INSERT INTO VANTROPO.SF_ACCOUNTCHANNEL (nkey__c)
                    VALUES (@nkey__c@)
                </value>
            </property>
            <property name="sqlParams">
                <map>
                    <entry key="nkey__c" value="java.lang.String"/>
                </map>
            </property>
        </bean>

    The SDL (mapping file) has the following values:

        # Account Insert Mapping values for query from Salesforce (left) and insert/update to Oracle (right)
        # SalesforceFieldName=OracleFieldName
        nkey__c=NKEY__C

    Any help appreciated.


  • Error using Dynamic Data Filtering: missing datasource

    - by sebastiaan
    I am trying to use the ASP.NET Dynamic Data Filtering project, but I'm running into a problem during the configuration. I'm following the instructions on the author's blog, and everything works as described. Then it tells me to change the datasource using the designer view. I am told to select the "GridDataSource" in the "Configure data source" wizard. This option is not there, though. I get all of the classes in my project, including the DataContext that was generated by LINQ. When I choose "Show only DataContext objects", the dropdown ("Choose your context object:") is completely empty. When I turn off the checkbox and choose my DataContext class, I get asked which table I want, and all that. But, as the whole purpose of a Dynamic Data site is NOT to use one single table, that's not much help. So I've looked at the instructions again and copied the resulting datasource from the example:

        <asp:DynamicLinqDataSource ID="GridDataSource" runat="server"
            EnableDelete="True" EnableUpdate="True"></asp:DynamicLinqDataSource>

    Which is exactly what I had, without the "WhereParameters" nodes in there. Now, when I run the list page, however, I get an exception about a missing datasource from the filtering component. Of course, when I remove the DynamicFilterRepeater, it works again. This is the meat of the exception:

        [InvalidOperationException: Missing DataSource]
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetTable() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:74
        Catalyst.Web.DynamicData.DynamicFilterRepeater.GetFilters() in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:81
        Catalyst.Web.DynamicData.DynamicFilterRepeater.OnInit(EventArgs e) in D:\Catalyst\Projects\DynamicData\Project\Trunk\DynamicData\DynamicData\DynamicFilterRepeater.cs:106

    How do I make the DynamicFilterRepeater recognize my datasource? I'm using VS2010 Pro on a Win7 machine.


  • RESTful WCF Data Service Authentication

    - by Adrian Grigore
    Hi, I'd like to add a REST API to an existing ASP.NET MVC website. I've managed to set up WCF Data Services so that I can browse my data, but now the question is how to handle authentication. Right now the data service is secured via the site's built-in forms authentication, and that's OK when accessing the service from AJAX forms. However, it's not ideal for a RESTful API. What I would like as an alternative to forms authentication is for users to simply embed the user name and password into the URL of the web service, or as request parameters. For example, if my web service is usually accessible as http://localhost:1234/api.svc I'd like to be able to access it using the URL http://localhost:1234/api.svc/{login}/{password} So, my questions are as follows: Is this a sane approach? If yes, how can I implement this? It seems trivial to redirect GET requests so that the login and password are attached as GET parameters. I also know how to inspect the HTTP context and use those parameters to filter the results. But I am not sure if / how the same approach could be applied to POST, PUT and DELETE requests. Thanks, Adrian
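
    For comparison, a minimal sketch of the header-based alternative (HTTP Basic over HTTPS), which behaves identically for GET, POST, PUT and DELETE; this is Python with the requests library, and the endpoint paths are assumptions, not the actual service:

        import requests
        from requests.auth import HTTPBasicAuth

        # Credentials ride in the Authorization header rather than the URL,
        # so they stay out of server logs, proxies and browser history;
        # HTTPS is still required to keep them encrypted on the wire.
        auth = HTTPBasicAuth("login", "password")
        base = "https://localhost:1234/api.svc"

        print(requests.get(base + "/Posts", auth=auth, verify=False).status_code)
        print(requests.delete(base + "/Posts(1)", auth=auth, verify=False).status_code)
        # verify=False only because a local dev certificate is assumed here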


  • asp.net Dynamic Data Site with own MetaData

    - by loviji
    Hello, I'm looking for info about configuring my own metadata in an ASP.NET Dynamic Data site. For example, I have a table in MS SQL Server with the structure shown below:

        CREATE TABLE [dbo].[someTable](
            [id] [int] NOT NULL,
            [pname] [nvarchar](20) NULL,
            [FullName] [nvarchar](50) NULL,
            [age] [int] NULL)

    And there are 2 MS SQL tables I've created, sysTables and sysColumns.

        sysTables:
        ID   sysTableName   TableName   TableDescription
        1  | someTable    | Persons   | All Data about Persons in system

        sysColumns:
        ID   TableName   sysColumnName        ColumnName   ColumnDesc                     ColumnType     MUnit
        1  | someTable | sometable_pname    | Name       | Person Name (ex. John)       | nvarchar(20) | null
        2  | someTable | sometable_Fullname | Full Name  | Person Name (ex. John Black) | nvarchar(50) | null
        3  | someTable | sometable_age      | age        | Person age                   | int          | null

    I want the Details/Edit/Insert/List/ListDetails pages to use sysColumns and sysTables as metadata, because, for example, in the Details page fullName is not as beautiful as Full Name. Any idea, is it possible? Thanks.

    Updated: In the List page, to display data from sysTables (the metadata table), I've modified <h2 class="DDSubHeader"><%= tableName%></h2>:

        public string tableName;
        protected void Page_Init(object sender, EventArgs e)
        {
            table = DynamicDataRouteHandler.GetRequestMetaTable(Context);
            //added by me
            uqsikDataContext sd = new uqsikDataContext();
            tableName = sd.sysTables.Where(n => n.sysTableName == table.DisplayName).FirstOrDefault().TableName;
            //end
            GridView1.SetMetaTable(table, table.GetColumnValuesFromRoute(Context));
            GridDataSource.EntityTypeName = table.EntityType.AssemblyQualifiedName;
            if (table.EntityType != table.RootEntityType)
            {
                GridQueryExtender.Expressions.Add(new OfTypeExpression(table.EntityType));
            }
        }

    So, what about sysColumns? How can I get data from my sysColumns table?


  • Jasper report exports empty data in PDF format when there is more data

    - by stanley
    I have a report to be exported in Excel, PDF and Word using JasperReports. I use an XML file as the data source for the report, but when the data increases Jasper exports an empty file, for PDF format only; when I reduce the data content it exports the available data correctly. Is there any limitation on PDF size? How can we manage the size in JasperReports from Java? My JRXML is really big, so I cannot add it here; I have added the Java code I use to export the content:

        JRAbstractExporter exporter = null;
        if (format.equals("pdf")) {
            exporter = new JRPdfExporter();
            jasperPrint.setPageWidth(Integer.parseInt(pWidth));
        } else if (format.equals("xls")) {
            exporter = new JRXlsExporter();
        } else if (format.equals("doc")) {
            //note: no exporter is assigned in this branch as written
            jasperPrint.setPageWidth(Integer.parseInt(pWidth));
        }
        exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint);
        exporter.setParameter(JRExporterParameter.OUTPUT_STREAM, outputStream_);
        exporter.exportReport();
        contents = outputStream_.toByteArray();
        response.setContentType("application/" + format);
        response.addHeader("Content-disposition", "attachment;filename=" + name.toString() + "." + format);


  • Data loss when downloading data from LDAP server

    - by Ricky D'Amelio
    Hi there. This question comes from a previous one I asked about handling NSData objects: http://stackoverflow.com/questions/2453785/converting-nsdata-to-an-nsstring-representation-is-failing. I have reached the point where I am taking an NSImage, turning it into NSData, and uploading those data bytes to the LDAP server. I am doing this like so:

        //connected successfully to LDAP server above...
        struct berval photo_berval;
        struct berval *jpegPhoto_values[2];

        photo_berval.bv_len = [photo length];
        photo_berval.bv_val = [photo bytes];
        jpegPhoto_values[0] = &photo_berval;
        jpegPhoto_values[1] = NULL;

        mod.mod_type = "jpegPhoto";
        mod.mod_op = LDAP_MOD_REPLACE|LDAP_MOD_BVALUES;
        mod.mod_bvalues = jpegPhoto_values;
        mods[0] = &mod;
        mods[1] = NULL;

        //perform the modify operation
        rc = ldap_modify_ext_s(ld, givenModifyEntry, mods, NULL, NULL);

    That happens with no errors, and you can see a big blob of data when you're in the command line. My problem is, when I go to access the same data at a later stage, I am getting an image file back that's about 120 times smaller than the original image.

        //find the jpegPhoto attribute
        photoA = ldap_first_attribute(ld, photoE, &photoBer);
        while (strcasecmp(photoA, "jpegphoto") != 0) {
            photoA = ldap_next_attribute(ld, photoE, photoBer);
        }

        //get the value of the attribute
        if ((list_of_photos = ldap_get_values_len(ld, photoE, photoA)) != NULL) {
            //get the first JPEG
            photo_data = *list_of_photos[0];  //copies the berval struct itself, not the bytes it points to
            selectedPictureData = [NSData dataWithBytes:&photo_data
                                                 length:sizeof(photo_data)];  //sizeof(photo_data) is the struct's size, not bv_len
            [selectedPictureData writeToFile:@"/Users/username/Desktop/Photo 2.jpg" atomically:YES];
            NSLog(@"%@", selectedPictureData);

    Has anyone successfully done this before, or can anyone see what I might be doing wrong? I appreciate anyone's help. Sorry to post so many questions! Ricky.


  • Pros and cons of making database IDs consistent and "readable"

    - by gmale
    Question: Is it a good rule of thumb for database IDs to be "meaningless"? Conversely, are there significant benefits from having IDs structured in a way where they can be recognized at a glance? What are the pros and cons?

    Background: I just had a debate with my coworkers about the consistency of the IDs in our database. We have a data-driven application that leverages Spring so that we rarely ever have to change code. That means, if there's a problem, a data change is usually the solution. My argument was that by making IDs consistent and readable, we save ourselves significant time and headaches long term. Once the IDs are set, they don't have to change often, and if done right, future changes won't be difficult. My coworkers' position was that IDs should never matter. Encoding information into the ID violates DB design policies, and keeping them orderly requires extra work that "we don't have time for." I can't find anything online to support either position. So I'm turning to all the gurus here at SA!

    Example: Imagine this simplified list of database records representing food in a grocery store. The first set represents data that has meaning encoded in the IDs, while the second does not. (A small sketch of this trade-off follows the example.)

        IDs with meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        101 Apple
        102 Banana
        103 Orange
        201 Lettuce
        202 Onion
        203 Carrot

        Location
        41 Aisle four top shelf
        42 Aisle four bottom shelf
        51 Aisle five top shelf
        52 Aisle five bottom shelf

        ProductLocation
        10141 Apple on aisle four top shelf
        10241 Banana on aisle four top shelf
        //just by reading the ids, it's easy to recognize that these are both Fruit on Aisle 4

        IDs without meaning:

        Type
        1 Fruit
        2 Veggie

        Product
        1 Apple
        2 Banana
        3 Orange
        4 Lettuce
        5 Onion
        6 Carrot

        Location
        1 Aisle four top shelf
        2 Aisle four bottom shelf
        3 Aisle five top shelf
        4 Aisle five bottom shelf

        ProductLocation
        1 Apple on aisle four top shelf
        2 Banana on aisle four top shelf
        //given the IDs, it's harder to see that these are both fruit on aisle 4

    Summary: What are the pros and cons of keeping IDs readable and consistent? Which approach do you generally prefer, and why? Is there an accepted industry best practice?
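
    To make the trade-off concrete, a small Python sketch of what "meaning in the ID" amounts to; the encoding scheme is invented for illustration:

        def product_location_id(product_id, location_id):
            # 10141 reads as "product 101 at location 41" -- convenient for a
            # human scanning data, but the scheme silently breaks once
            # location ids need three digits or products outgrow their band.
            return product_id * 100 + location_id

        def decode(pl_id):
            return divmod(pl_id, 100)  # (product_id, location_id)

        print(product_location_id(101, 41))  # 10141, apple on aisle four top shelf
        print(decode(10141))                 # (101, 41)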


  • Project setup for an ADO.NET/WCF DataService

    - by Slauma
    I'd like to implement an ADO.NET/WCF DataService, and I am wondering what's the best way to set up a project in VS2008 SP1 for this purpose. Currently I have an ASP.NET web application project (not of the "WebSite" project type). The data access layer is an Entity model (EF version 1) with a SQL Server database. I have the Entity model in a separate DLL project, and the web application project references this assembly for all data access. The ADO.NET/WCF DataService needs to communicate with the Entity model/database as well. It has to be hosted on the same web server (IIS 7.5) together with the web application. Since the DataService is not directly related to that specific web application (though it will provide and modify data from/in the same database the web application uses), my basic idea was to separate the DataService into its own new project (which also references the Entity model DLL). Now I have seen that there is no project type "ADO.NET/WCF DataService" in VS2008 SP1. It seems only possible to add a DataService as an element to other existing projects, for instance web application projects. Why isn't there a separate DataService project type? Does this mean I now have to add the DataService as an element to my web application project? Or shall I create a new web application project and add a DataService to it? (I could delete the pregenerated default.aspx, since I do not need any web pages in this project.) What's the best way? Thank you for suggestions in advance!


  • Core Data 1-to-many relationship: List all related objects as section header in UITableView

    - by Snej
    Hi: I'm struggling with Core Data on the iPhone over the following. I have a 1-to-many relationship in Core Data. Assume the entities are called Recipe and Category; a category can have many recipes. I have managed to get all recipes listed in a UITableView with section headers named after the category. What I want to achieve is to list all categories as section headers, even those which have no recipes:

        category1   <--- this one should be displayed too
        category2
            recipe_x
            recipe_y
        category3
            recipe_z

        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Recipe"
                                                  inManagedObjectContext:managedObjectContext];
        [fetchRequest setEntity:entity];
        [fetchRequest setFetchBatchSize:10];

        NSSortDescriptor *sortDescriptor1 = [[NSSortDescriptor alloc] initWithKey:@"category.categoryName" ascending:YES];
        NSSortDescriptor *sortDescriptor2 = [[NSSortDescriptor alloc] initWithKey:@"recipeName" ascending:YES];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor1, sortDescriptor2, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];

        NSFetchedResultsController *aFetchedResultsController =
            [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                managedObjectContext:managedObjectContext
                                                  sectionNameKeyPath:@"category.categoryName"
                                                           cacheName:@"Recipes"];

    What is the most elegant way to achieve this with Core Data?
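
    One way to get empty sections is to drive the table from the categories rather than from the fetched recipes, i.e., treat Category as the primary list and hang its (possibly empty) recipes off each section. A minimal language-neutral sketch of that shape, in Python with illustrative names:

        categories = ["category1", "category2", "category3"]
        recipes_by_category = {
            "category2": ["recipe_x", "recipe_y"],
            "category3": ["recipe_z"],
        }

        # Every category becomes a section; missing keys yield empty sections.
        sections = [(c, recipes_by_category.get(c, [])) for c in categories]
        for title, rows in sections:
            print(title, rows)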


  • Empty data problem - data layer or DAL?

    - by luckyluke
    I'm designing a new app now and giving the following question a lot of thought. I consume a lot of data from the warehouse, and the entities have a lot of dictionary-based values (currency, country, tax data, whatever): dimensions. I cannot be assured, though, that there won't be nulls. So I am thinking:

    - create an empty value in each of the dictionaries with a special keyID, i.e. -1
    - have the ETL (SSIS) do the correct stuff and insert -1 where it needs to
    - let the DAL know that -1 is special (static const, whatever thing)
    - don't care in the code about checking for nullness of dictionary entries, because THEY will always have a value

    But maybe I should be thinking:

    - import data AS IS
    - let the DAL do the thinking, using the empty-record pattern
    - still don't care in the code, because the business layer will have what it needs from the DAL

    I think it's more of an approach thing, but maybe I am missing something important here... What do you think? Am I clear? Please don't confuse it with the empty-record problem; I do use the emptyCustomer thing all the time, and other defaults too.
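
    Both options converge on the Null Object idea: hand back a well-known "empty" dimension member instead of null. A minimal sketch of that shape, in Python with invented names, just to pin down the pattern being discussed:

        class Currency:
            def __init__(self, key, code):
                self.key, self.code = key, code

        UNKNOWN = Currency(-1, "N/A")  # the "empty" dictionary row

        def lookup_currency(currencies, key):
            # The DAL returns the sentinel instead of null, so the business
            # layer never has to null-check dimension values.
            return currencies.get(key, UNKNOWN)

        rates = {1: Currency(1, "USD"), 2: Currency(2, "EUR")}
        print(lookup_currency(rates, 2).code)   # EUR
        print(lookup_currency(rates, 99).code)  # N/A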


  • Core Data - Entity Relationships Not Working as expected

    - by slimms
    I have set up my data model in Xcode like so:

        EntityA
            AttA1
            AttA2

        EntityB
            AttB1
            AttB2
            AttB3

    I then set up the relationships:

        EntityA
            Name: rlpToEntityB
            Destination: EntityB
            Inverse: rlpToEntityA
            To Many: Checked

        EntityB
            Name: rlpToEntityA
            Destination: EntityA
            Inverse: rlpToEntityB
            To Many: Unchecked

    i.e., a relationship between the two where each EntityA can have many EntityBs. It is my understanding that if I fetch a subset of EntityBs, I can then retrieve the values for the related EntityAs. I have this working so that I can retrieve the EntityB values using:

        NSManagedObject *objMO = [fetchedResultsController objectAtIndexPath:indexPath];
        strValueFromEntityB = [objMO valueForKey:@"AttB1"];

    However, if I try to retrieve a related value from EntityA by doing the following:

        strValueFromEntityA = [objMO valueForKey:@"AttA1"];

    I get the error "The entity EntityB is not Key value coding-compliant for the key Atta1". Not surprisingly, I suppose, if I switch things around to fetch from EntityA, I cannot access attributes of EntityB. So it appears the defined relationships are being ignored. Can anyone spot what I am doing wrong? I confess I'm very new to iPhone programming, and especially to Core Data, so please go easy on me and provide verbose explanations or point me in the direction of a specific resource. I have downloaded the Apple sample apps (Core Data Books, Top Songs and Recipes) but I still can't work this out. Thanks in advance, Nev.


  • WCF Data Services - neither .Expand or .LoadProperty seems to do what I need

    - by TomK
    I am building a school management app where they track student tardiness and absences. I've got three entities to help me in this: a Students entity (first name, last name, ID, etc.); a SystemAbsenceTypes entity with SystemAbsenceTypeID values for Late, Absent-with-Reason, Absent-without-Reason; and a cross-reference table called StudentAbsences (matching the student IDs with the absence-type ID, plus a date and a Notes field). What I want to do is query my entities for a given student, and then add up the number of each kind of absence for a given date range. I prepare my currentStudent object without a problem, then I do this...

        Me.Data.LoadProperty(currentStudent, "StudentAbsences") 'Loads the cross-ref data
        lblDaysLate.Text = (From ab In currentStudent.StudentAbsences
                            Where ab.SystemAbsenceTypes.SystemAbsenceTypeID = Common.enuStudentAbsenceTypes.Late).Count.ToString

    ...and this second line fails, complaining it has no value for an object. I presume the problem is that while it DOES see that there are (let's say) four absences for the currentStudent (i.e., currentStudent.StudentAbsences.Count = 4), it can't yet "peer into" each one of the absences to look at its type. How do I use .Expand or .LoadProperty to make this happen? I tried fiddling with .LoadProperty, but it doesn't take a two-level syntax like so...

        Data.LoadProperty(currentStudent, "StudentAbsences.SystemAbsenceTypeID")

    ...or the like. Is there some other technique?


  • NSUndoManager with Core Data - Redo not working

    - by CJ
    I have a Core Data document-based app which supports undo/redo via the built-in NSUndoManager associated with the NSManagedObjectContext. I have a few actions set up which perform numerous tasks within Core Data, wrap all these tasks into an undo group via beginUndoGrouping/endUndoGrouping, and are processed by the NSUndoManager. Undo works fine. I can perform several successive actions, and then undo each one of them successively, and my app's state is maintained correctly. However, the "Redo" menu item is never enabled. This means that the NSUndoManager is telling the menu that there are no items to redo. I am wondering why the NSUndoManager is seemingly forgetting about items once they are undone and not allowing redos to occur. One thing I should mention is that I'm disabling undo registration after a document is opened/created. When I perform an action, I call enableUndoRegistration, beginUndoGrouping, perform the action, then call processPendingChanges, setActionName:, endUndoGrouping, and finally disableUndoRegistration. This makes sure that only specific actions are undoable, and any other data changes I make outside of these go unnoticed by the NSUndoManager. This may be a part of the issue, but if so, I'm wondering why it's affecting redo. Thanks in advance.


  • Regressing panel data in SAS.

    - by John
    Hey guys, thanks to your help I successfully managed all my databases! I am now looking at a panel data set on which I have to regress. Since I only started my PhD this semester, together with the econometrics courses, I am still new to many statistics applications and regression methods. I want to do a simple regression, as in Y = x1 + x2 + x3, etc. Now, I have already browsed through some literature and found that for panel data it's common to do a fixed-effects regression. Also, my Y variable only takes non-negative values, so I was thinking in the direction of a Tobit model. I'm doing some research concerning the coverage of analysts in the financial business. My dependent variable is the coverage of analysts of a certain firm, so per observation I have 1 analyst and 1 firm, together with different characteristics (market cap, betas, etc.) of the firm. All this data is monthly. As coverage cannot become negative (only 0), I was thinking of a Tobit model. Do you guys have any ideas on what would be a good regression method? Or do you have some good sources of information (e-books, written books; through university I have access to almost anything concerning my field of work), since I do have to learn these things for future research? Thanks!
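
    For reference, this is the censoring structure a (Type I) Tobit model assumes, in standard textbook notation; nothing below is fitted to this data set. The observed coverage y_i is a latent variable y_i^* censored at zero:

        y_i^* = x_i'\beta + \varepsilon_i, \qquad \varepsilon_i \sim N(0, \sigma^2)
        y_i   = \max(0,\, y_i^*)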


  • Creating LINQ to SQL Data Models' Data Contexts with ASP.NET MVC

    - by Maxim Z.
    I'm just getting started with ASP.NET MVC, mostly by reading ScottGu's tutorial. To create my database connections, I followed the steps he outlined, which were to create a LINQ-to-SQL .dbml model, add in the database tables through the Server Explorer, and finally to create a DataContext class. That last part is the part I'm stuck on. In this class, I'm trying to create methods that work with the exposed data. Following the example in the tutorial, I created this:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Web;

        namespace MySite.Models
        {
            public partial class MyDataContext
            {
                public List<Post> GetPosts()
                {
                    return Posts.ToList();
                }

                public Post GetPostById(int id)
                {
                    return Posts.Single(p => p.ID == id);
                }
            }
        }

    As you can see, I'm trying to use my Post data table. However, it doesn't recognize the "Posts" part of my code. What am I doing wrong? I have a feeling that my problem is related to not adding the data tables correctly, but I'm not sure. Thanks in advance.


  • Storing high precision latitude/longitude numbers in iOS Core Data

    - by Bryan
    I'm trying to store latitudes/longitudes in Core Data. These end up having anywhere from 6 to 20 digits of precision, and for whatever reason (I had them as floats in Core Data) it's rounding them and not giving me the exact values back. I tried the "decimal" type with no luck either. Are NSStrings my only other option?

    EDIT NSManagedObject:

        @interface Event : NSManagedObject {
        }
        @property (nonatomic, retain) NSDecimalNumber * dec;
        @property (nonatomic, retain) NSDate * timeStamp;
        @property (nonatomic, retain) NSNumber * flo;
        @property (nonatomic, retain) NSNumber * doub;

    Here's the code for a sample number that I store into Core Data:

        NSNumber *n = [NSDecimalNumber decimalNumberWithString:@"-97.12345678901234567890123456789"];

    Code to access it again:

        NSNumber *n = [managedObject valueForKey:@"dec"];
        NSNumber *f = [managedObject valueForKey:@"flo"];
        NSNumber *d = [managedObject valueForKey:@"doub"];

    Printed values:

        Printing description of n: -97.1234567890124
        Printing description of f: <CFNumber 0x603f250 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
        Printing description of d: <CFNumber 0x6040310 [0xfef3e0]>{value = -97.12345678901235146441, type = kCFNumberFloat64Type}
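
    The rounding itself is expected: a float/double attribute is an IEEE-754 double, which preserves only about 15-17 significant decimal digits, so a 20+ digit coordinate cannot round-trip through it. A quick Python illustration of the same effect, using the value from the question:

        from decimal import Decimal

        lon = -97.12345678901234567890123456789  # parsed into an IEEE-754 double
        print(repr(lon))     # -97.12345678901235 -- roughly 16 digits survive
        print(Decimal(lon))  # the exact binary value actually stored
        # Keeping all 20+ digits means storing the original text (a string)
        # or a scaled integer, not a float/double.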


  • Pass a data.frame column name to a function

    - by Kevin Middleton
    I'm trying to write a function to accept a data.frame (x) and a column from it. The function performs some calculations on x and later returns another data.frame. I'm stuck on the best-practices method to pass the column name to the function. The two minimal examples fun1 and fun2 below produce the desired result, being able to perform operations on x$column, using max() as an example. However, both rely on the seemingly (at least to me) inelegant (1) call to substitute() and possibly eval() and (2) the need to pass the column name as a character vector.

        fun1 <- function(x, column){
          do.call("max", list(substitute(x[a], list(a = column))))
        }

        fun2 <- function(x, column){
          max(eval((substitute(x[a], list(a = column)))))
        }

        df <- data.frame(A = 1:20, B = rnorm(10))
        fun1(df, "B")
        fun2(df, "B")

    I would like to be able to call the function as fun(df, B), for example. Other options I have considered but have not tried:

    1. Pass column as an integer of the column number. I think this would avoid substitute(). Ideally, the function could accept either.
    2. with(x, get(column)), but, even if it works, I think this would still require substitute().
    3. Make use of formula() and match.call(), neither of which I have much experience with.

    Subquestion: Is do.call() preferred over eval()?

    Thanks, Kevin


  • LOAD DATA INFILE with variables

    - by Hasitha
    I was trying to use LOAD DATA INFILE as a stored procedure, but it seems it cannot be done. Then I tried the usual way of embedding the code in the application itself, like so:

        conn = new MySqlConnection(connStr);
        conn.Open();
        MySqlCommand cmd = new MySqlCommand();
        cmd = conn.CreateCommand();
        string tableName = getTableName(serverName);
        string query = "LOAD DATA INFILE '" + fileName + "'INTO TABLE " + tableName +
            " FIELDS TERMINATED BY '" + colSep + "' ENCLOSED BY '" + colEncap +
            "' ESCAPED BY '" + colEncap + "'LINES TERMINATED BY '" + colNewLine +
            "' (" + colUpFormat + ");";
        cmd.CommandText = query;
        cmd.ExecuteNonQuery();
        conn.Close();

    The generated query that gets saved in the string variable query is:

        LOAD DATA INFILE 'data_file.csv'INTO TABLE tbl_shadowserver_bot_geo FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '"'LINES TERMINATED BY '\n' (timestamp,ip,port,asn,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,url,agent,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy,@dummy);

    But now when I run the program it gives an error saying:

        MySQLException(0x80004005) Parameter @dummy must be defined first.

    I don't know how to get around this, but when I use the same query on MySQL itself directly, it works fine. PLEASE help me... thank you very much :)


  • Using Constraints on Hierarchical Data in a Self-Referential Table

    - by pbarney
    Suppose you have the following table, intended to represent hierarchical data:

        +--------+-------------+
        | Field  | Type        |
        +--------+-------------+
        | id     | int(10)     |
        | parent | int(10)     |
        | name   | varchar(45) |
        +--------+-------------+

    The table is self-referential in that parent refers to id. So you might have the following data:

        +----+--------+---------------+
        | id | parent | name          |
        +----+--------+---------------+
        |  1 |      0 | fruit         |
        |  2 |      0 | vegetable     |
        |  3 |      1 | apple         |
        |  4 |      1 | orange        |
        |  5 |      3 | red delicious |
        |  6 |      3 | granny smith  |
        |  7 |      3 | gala          |
        +----+--------+---------------+

    Using MySQL, I am trying to impose a (self-referential) foreign key constraint upon the data to update on cascade and prevent deletion of fruit that have "children." So I used the following:

        CREATE TABLE `idtlp_main`.`fruit` (
            `id` INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
            `parent` INT(10) UNSIGNED,
            `name` VARCHAR(45) NOT NULL,
            PRIMARY KEY (`id`),
            CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
                REFERENCES `fruit` (`id`)
                ON UPDATE CASCADE
                ON DELETE RESTRICT
        ) ENGINE = InnoDB;

    From what I understand, this should fit my requirements. (And parent must default to null to allow insertions, correct?) The problem is, if I change the id of a record, it will not cascade:

        Cannot delete or update a parent row: a foreign key constraint fails
        (`iddoc_main`.`fruit`, CONSTRAINT `fk_parent` FOREIGN KEY (`parent`)
        REFERENCES `fruit` (`id`) ON UPDATE CASCADE)

    What am I missing? Feel free to correct me if my terminology is screwed up... I'm new to constraints.


  • iPhone application update (using Core Data on Sqlite)

    - by owen
    I have an app which is using Core Data on SQLite. Now I have an update which has some DB structure changes, say adding a new table. I know that when an app gets updated, it updates the app binary only; nothing in the documents directory will be changed. When the app gets updated, launches for the first time, and runs

        [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:[self managedObjectModel]];

    it will find the difference between the data model and the DB structure in SQLite, throw an exception, and quit. Error: "The model used to open the store is incompatible with the one used to create the store". So, can anyone here give me some idea how to update an app when there is a DB structure change? I think we can run a DB script to create that new table when the update launches for the first time. But if there are other changes, like changing the type of some fields or deleting some fields, and we need to migrate the old data, this is really a headache. In this case, is the only way to create a new app? Has anyone tried something similar?


  • Point data structure for a sketching application

    - by bebraw
    I am currently developing a little sketching application based on the HTML5 Canvas element. There is one particular problem I haven't yet managed to find a proper solution for. The idea is that the user will be able to manipulate existing stroke data (points) quite freely. This includes pushing point data around (i.e. a magnet tool) and otherwise manipulating it at whim (i.e. altering color). Note that the current brush engine is able to shade by taking existing stroke data into account. It's a quick and dirty solution, as it just iterates the points in the current stroke and checks them against a distance rule. Now the problem is how to do this in a nice manner. It is extremely important to be able to perform efficient queries that return all points within a given canvas coordinate and radius. Other features, such as space usage, should be secondary to this. I don't mind doing some extra processing between strokes while the user is not painting. Any pointers are welcome. :)
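
    One common answer here is a uniform grid that buckets points by cell, so a radius query only scans the cells overlapping the query circle. A minimal Python sketch of the idea; the cell size is an assumption you would tune to typical brush radii, and rebuilding the buckets between strokes fits the "extra processing while not painting" budget (a quadtree gives the same queries with adaptive resolution):

        from collections import defaultdict

        class PointGrid:
            # Buckets stroke points by grid cell for cheap radius queries.
            def __init__(self, cell=32):
                self.cell = cell
                self.buckets = defaultdict(list)

            def _key(self, x, y):
                return (int(x // self.cell), int(y // self.cell))

            def add(self, x, y, point):
                self.buckets[self._key(x, y)].append((x, y, point))

            def near(self, x, y, r):
                # Scan only cells that can overlap the query circle.
                cx, cy = self._key(x, y)
                span = int(r // self.cell) + 1
                for gx in range(cx - span, cx + span + 1):
                    for gy in range(cy - span, cy + span + 1):
                        for px, py, p in self.buckets.get((gx, gy), ()):
                            if (px - x) ** 2 + (py - y) ** 2 <= r * r:
                                yield p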


  • Algorithm to detect repeating/similar strings in a corpus of data -- say email subjects, in Python

    - by RizwanK
    I'm downloading a long list of my email subject lines, with the intent of finding email lists that I was a member of years ago and would want to purge from my Gmail account (which is getting pretty slow). I'm specifically thinking of newsletters that often come from the same address and repeat the product/service/group's name in the subject. I'm aware that I could search/sort by the common occurrence of items from a particular email address (and I intend to), but I'd like to correlate that data with repeating subject lines. Now, many subject lines would fail a string match, but

        "Google Friends : Our latest news"
        "Google Friends : What we're doing today"

    are more similar to each other than a random subject line, as are:

        "Virgin Airlines has a great sale today"
        "Take a flight with Virgin Airlines"

    So -- how can I start to automagically extract trends/examples of strings that may be more similar? Approaches I've considered and discarded ('because there must be some better way'):

    1. Extracting all the possible substrings and ordering them by how often they show up, and manually selecting relevant ones
    2. Stripping off the first word or two and then counting the occurrence of each substring
    3. Comparing Levenshtein distance between entries
    4. Some sort of string similarity index

    Most of these were rejected for massive inefficiency or the likelihood of a vast amount of manual intervention being required. I guess I need some sort of fuzzy string matching..? In the end, I can think of kludgy ways of doing this, but I'm looking for something more generic that I can add to my set of tools, rather than special-casing for this data set. After this, I'd be matching the occurrence of particular subject strings with 'From' addresses - I'm not sure if there's a good way of building a data structure that represents how likely/not two messages are part of the 'same email list', or of filtering all my email subjects/from addresses into pools of likely 'related' emails and not -- but that's a problem to solve after this one. Any guidance would be appreciated.
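
    Since the post mentions Python, one hedged starting point is difflib.SequenceMatcher from the standard library, which gives a pairwise similarity ratio without any manual substring bookkeeping; what score counts as "same list" is a threshold you would have to tune:

        from difflib import SequenceMatcher
        from itertools import combinations

        def similarity(a, b):
            # Ratio in [0, 1]; higher means more shared substructure.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        subjects = [
            "Google Friends : Our latest news",
            "Google Friends : What we're doing today",
            "Virgin Airlines has a great sale today",
            "Take a flight with Virgin Airlines",
        ]
        for a, b in combinations(subjects, 2):
            print(f"{similarity(a, b):.2f}  {a!r} vs {b!r}")

    Pairing each score with the 'From' address (e.g., only clustering subjects that share a sender) would cut the quadratic comparison cost considerably.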


  • What changes in Core Data after a save?

    - by Splash6
    I have a Core Data based Mac application that is working perfectly well until I save a file. When I save a file, it seems like something changes in Core Data, because my original fetch request no longer fetches anything. This is the fetch request that works before saving but returns an empty array after saving:

        NSEntityDescription *outputCellEntityDescription =
            [NSEntityDescription entityForName:@"OutputCell"
                        inManagedObjectContext:[[self document] managedObjectContext]];
        NSFetchRequest *outputCellRequest = [[[NSFetchRequest alloc] init] autorelease];
        [outputCellRequest setEntity:outputCellEntityDescription];
        NSPredicate *outputCellPredicate = [NSPredicate predicateWithFormat:@"(cellTitle = %@)", outputCellTitle];
        [outputCellRequest setPredicate:outputCellPredicate];
        NSError *outputCellError = nil;
        NSArray *outputCellArray =
            [[[self document] managedObjectContext] executeFetchRequest:outputCellRequest
                                                                  error:&outputCellError];

    I have checked with [[[self document] managedObjectContext] registeredObjects] that the object still exists after the save, and nothing about it seems to have changed. It is probably something fairly basic, but does anyone know what I might be doing wrong? If not, can anyone give me any pointers to what might be different in the Core Data model after a save, so I might have some clues as to why the fetch request stops working after saving?

